Oct 2024

Austria

Law Over Borders Comparative Guide: Artificial Intelligence

Introduction

In Austria, numerous companies are active in the field of AI, but only a small proportion of them are actually engaged in developing it (see E. Prem and S. Ruhland, “AI in Austria: An approach based on economic statistical analyses”, retrieved from: https://www.bmk.gv.at/en.html). Most companies in the AI sector have a background in software development or offer related data-processing services, followed by business and market consultancies that use their own software to analyse corporate data, stock market data and the like.

New focal points have emerged in the area of production and Industry 4.0 (an Austrian alliance of political, economic and scientific stakeholders actively involved in shaping the future of Austrian production and work), for example in predictive maintenance.

The Austrian state supported these research activities between 2012 and 2020 with a total of EUR 910 million in funding (AIM AT 2030). Despite this, Austrian companies are not yet fully exploiting the potential of AI: according to a survey of 168 managers in Austria conducted by Deloitte in 2023, AI still plays little or no role in their companies, despite a generally positive attitude (see www2.deloitte.com/content/dam/Deloitte/at/Documents/presse/at-deloitte-ai-quick-study-2023.pdf).

Apart from production and industry, Austria also has a large number of research institutions that deal with AI. Particularly in key AI subfields such as machine learning, symbolic methods, robotics and autonomous systems, Austrian universities and research institutions have a high level of competence and enjoy a good reputation worldwide. One of the best-known institutions for AI research is the Laboratory for Artificial Intelligence (AI LAB) of the Linz Institute of Technology (LIT) at the Johannes Kepler University in Linz, which is headed by Sepp Hochreiter (a pioneer in the field of AI); the university also offers a bachelor’s degree programme in Artificial Intelligence.

1. Constitutional law and fundamental human rights

1.1. Domestic constitutional provisions

Currently, there are no specific constitutional provisions regulating AI systems and no decisions referring to them. 

However, within the framework of the AIM Strategy 2030 presented in summer 2021, the Austrian federal government intends to create a legal framework for placing AI on the market, putting it into operation and using it safely, thereby guaranteeing the fundamental rights of citizens. The AI strategy was developed and presented as an agile strategy, with the aim of adopting a revised version in the first half of 2024. All departments within the AI Policy Forum, as well as experts from research, science and business and other stakeholders, will be involved in the development process (see www.bmf.gv.at/presse/pressemeldungen/2023/september/ki-massnahmenpaket.html).

The federal government has declared its intention to actively participate at international level in order to strengthen and concretise the international legal framework (especially regarding human rights and international humanitarian law) for the digital space and to develop standards for the ethical use of AI in accordance with human rights.

The right to privacy is constitutionally protected in section 1, Data Protection Act. According to this provision, everyone (both natural and legal persons) has the right to confidentiality of personal data concerning him or her, in particular with regard to respect for his or her private and family life, insofar as there is an interest worthy of protection. Such an interest is excluded only if the data is not amenable to a claim to secrecy due to its general availability or because it cannot be traced back to the person concerned. AI systems that are trained with personal data, or that are used for decisions concerning individuals, can infringe this right.

The “right to informational self-determination” is derived from this claim to secrecy under section 1, paragraph 1 of the Data Protection Act. Although it is not expressly affirmed, it can be deduced from human dignity and general principles of the rule of law and is also recognised by the case law of the Austrian Constitutional Court (see Constitutional Court 27.6.2014, G 47/2012, G 59/2012, G 62/2012, G 70/2012, G 71/2012). The right to informational self-determination guarantees the freedom of the individual to decide when and within what limits personal facts of life are disclosed. Where data about, or predictions of, a person’s behaviour are generated by creating links between datasets, it is precisely this right that appears to be infringed, as the data subject has no influence on the information generated about his or her customer affiliations, interests, behaviour or way of life, all of which indisputably concern his or her private life.

Against this background, it can be particularly problematic that the data subject has no ability to influence the profile/output created about him or her by the AI system. However, such circumstances have not yet been the subject of a decision. 

1.2. Human rights decisions and conventions

There are no decisions or conventions that refer to AI. 

2. Intellectual property

IP is relevant to AI systems from two perspectives: the protection of the components of an AI system (data, algorithm, model, software) on the one hand, and the protection of works created by an AI system on the other.

Neither the Austrian courts nor the legislator have addressed these topics so far.

2.1. Patents

Artificial intelligence regularly involves mathematical solutions realised in software, i.e. computer-implemented processes. In Austria, these are eligible for patent protection only to a limited extent, as computer programs are not patentable (section 1, paragraph 3, No 5 Patent Act). Rather, the program must have a “further technical effect” (see Austrian Supreme Court 25.8.2016, 4 Ob 94/16a). For a patent to be granted, an AI system would therefore have to contribute to solving a concrete technical problem with technical means, such as the control of an autonomous vehicle.

2.2. Copyright

The protection of investment in AI developments is fragmented. An algorithm, which is typically a mathematical rule, cannot be protected as a computer program under section 40a, Austrian Copyright Act. In addition, the results achieved by an AI system (the trained model and the output data), which are the most valuable components of an AI system, cannot be protected under the Copyright Act due to the lack of human intervention.

According to the Austrian Supreme Court, a computer converting millions of found image files into thumbnails completely independently and without further human intervention does not suffice for a protectable work. This is the case where a program, once defined, is no longer used merely as an aid to human activity but instead takes over the tasks of a human creator in their entirety (Austrian Supreme Court, 20 September 2011, 4 Ob 105/11m). Even though this decision did not refer to an AI system, the circumstances are comparable to situations in which an AI system is able to create a work fully autonomously (without human intervention). Only a product of the human mind can be protected by copyright law, which is regularly not the case with artificially generated works, because the output lacks any link to the intellectual process and creative will of a human being. If a human being subsequently gives the work personal traits through specific commands and further development of the artificially generated suggestions, it may be eligible for protection as a work. However, the personality of the creator must be expressed in such a way that the work bears the hallmark of uniqueness and of belonging to its creator (established case law since Austrian Supreme Court 7 March 1978, 4 Ob 317/78). For artificially generated works created without human intervention in the manner described, no regulations exist.
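
By way of illustration, the following is a minimal sketch of the kind of “defined once, fully automated” process the court described: after the program is written, each thumbnail is produced without any human creative choice. The paths and the Pillow-based implementation are illustrative assumptions, not taken from the decision.

```python
# Minimal sketch of a "defined once, fully automated" conversion pipeline,
# as described in 4 Ob 105/11m. Directory names and sizes are hypothetical.
from pathlib import Path
from PIL import Image  # Pillow

def make_thumbnails(src_dir: str, dst_dir: str, size=(128, 128)) -> None:
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for img_path in Path(src_dir).glob("*.jpg"):
        with Image.open(img_path) as img:
            img.thumbnail(size)            # deterministic resize, no creative choice
            img.save(out / img_path.name)  # output produced without human input

make_thumbnails("found_images", "thumbnails")
```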

If the artificially generated work was created with any human intervention, there are already national rules for analysing who has influenced the machine in such a creative manner that they would be treated as author (or co-author). This could be, for instance, the developer of the algorithm or the entity that introduced the algorithm into the AI system. Hence, it is a case-by-case decision depending on individual circumstances that will be determined by the national courts.

The Austrian Copyright Act does not provide related rights protecting the investment in the original algorithm and in the databases used for deep learning (comparable to the existing database protection).

2.3. Trade secrets and confidentiality

According to section 26b of the Austrian Act against Unfair Competition, a trade secret is information that:

  • is secret because it is neither generally known nor readily accessible, either in its entirety or in the precise arrangement and composition of its components, to persons in the circles normally concerned with this type of information;
  • is of commercial value because it is secret; and
  • is subject to reasonable confidentiality measures, appropriate to the circumstances, by the person exercising legitimate control over that information.

This broad range of potentially protectable commercial information suggests that data more generally may also be covered. Simple raw data will regularly fail to meet the requirements of section 26b(1) of the Unfair Competition Act, but this does not necessarily apply to refined data in a particular compilation which, as such, is not generally known or readily accessible and which has commercial value. These criteria could apply to the patterns and weights created during the training of a model for an AI system. A prerequisite for protection would be that the individual variables of the calculation within the model remain secret, which, however, could collide with the transparency requirements under the GDPR and the AI Act.

An Austrian court has not yet decided in a case with such circumstances. Hence, it is unclear whether components of an AI system could be protected as trade secrets. 

2.4. Notable cases

There are no notable cases in Austria dealing with the protection of AI components.

3. Data

In Austria there is no comprehensive regulation for data in the context of AI systems: neither for the handling of raw data generated by machines, which is not personal data, nor for machine-generated data, which still has a personal reference, nor for its economic use and tradability. The issue is therefore largely regulated by contractual agreements.

3.1. Domestic data law treatment

According to section 285, Austrian Civil Code, everything that is distinct from a person and serves the use of people is a thing in the legal sense. The prerequisite for qualification as a thing is that it is controllable and exists in a limited, available quantity; otherwise, it would be common property. Under this definition, data is consistently understood as a legal thing, but there is no agreement as to whether it is generally to be treated as a tangible or an intangible thing. The predominant Austrian commentary literature assumes that data not stored on a data carrier should be treated as an intangible thing.

However, no authority or court has so far dealt conclusively with the ownership of data. Although data is supposed to be marketable largely like other intangible goods and can therefore be the subject of a purchase, it is largely unclear which provisions on purchase, as a title for the acquisition of ownership, actually apply.

3.2. General Data Protection Regulation

Provided that AI systems process personal data (be it as input or as output), the European GDPR and the national provisions of the Austrian Data Protection Act apply. The Austrian data protection authority (DSB) has not yet addressed the permissibility of data processing and the rights of data subjects in relation to AI in a decision; however, on 29 April 2024 a complaint against OpenAI was filed by the Austrian privacy activist Max Schrems and his non-governmental organisation NOYB (see, e.g. www.euractiv.com/section/digital/news/schrems-ngo-files-gdpr-complaint-against-openai-over-ai-hallucinations). The complaint centres on the issue that ChatGPT does not ensure accurate processing of data (Article 5, paragraph 1(d), GDPR), criticising the fact that answers to user requests are generated by a process of mathematical correlation, i.e. by predicting the most statistically likely words to appear in a sequence.
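
By way of illustration only, the following toy sketch shows the mechanism the complaint refers to: a language model ranks candidate continuations by probability and outputs the most likely one, with no built-in notion of factual accuracy. The words and probabilities are invented; no real model is queried.

```python
# Toy sketch of "predicting the most statistically likely word": the model
# simply chooses the highest-probability continuation, true or not.
# All candidates and numbers are invented for illustration.
candidates = {"Vienna": 0.62, "Graz": 0.21, "Linz": 0.17}

def most_likely_next_word(probabilities: dict) -> str:
    # The statistically most likely word wins, regardless of whether the
    # resulting statement about a person is accurate (Article 5(1)(d), GDPR).
    return max(probabilities, key=probabilities.get)

print(most_likely_next_word(candidates))  # -> "Vienna"
```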

In April 2024 the DSB published FAQs (see www.dsb.gv.at/download-links/FAQ-zum-Thema-KI-und-Datenschutz.html) in which it only generally emphasises that the GDPR applies when AI systems are used and describes the processing principles, but does not deal with specific questions of interpretation (e.g. in which training phase data processing takes place or how the deletion of data from a training corpus is to be implemented).

The DSB has also recently, in a notification of 28 May 2024 (D036.500/2024 2024-0.380.937), commented on the relationship between the GDPR and the AI Act. In summary, the DSB’s opinion is as follows:

  • The GDPR remains applicable even after the AI Act comes into force if personal data is processed.
  • If personal data is processed in the context of the use of AI systems, this is only permissible if the data protection principles pursuant to Article 5, GDPR, are complied with. In particular, a justification pursuant to Article 6, paragraph 1 (and, if applicable, Article 9, paragraph 2) GDPR must exist.
  • If Article 22, GDPR (automated decisions in individual cases, including profiling) applies, the requirements of this provision must be met.
  • In accordance with Article 5, paragraph 2, GDPR, the controller bears the burden of proof that the processing is lawful.

Furthermore, in connection with AI systems, the DSB re-emphasised the obligation to conduct a data protection impact assessment. Article 35(4), GDPR provides that the national supervisory authority shall draw up and publish a list of processing operations for which a data protection impact assessment must be carried out (the so-called blacklist). The DSB has made use of this provision and implicitly refers to AI technologies as being subject to it.

Three types of processing listed in section 2, paragraph 1 of this blacklist could be applicable to AI systems:

  1. Data processing which involves the evaluation or classification of natural persons, including profiling and forecasting, for purposes relating to the performance at work, economic situation, health, personal preferences and interests, reliability or conduct, location or change of location of the person and which are based solely on automated processing and are likely to produce adverse legal, physical or financial effects.
  2. Processing of data intended to evaluate the behaviour and other personal aspects of natural persons, which may be used by third parties to produce automated decision-making, which produces legal effects concerning the persons evaluated or similarly significantly affects them.
  3. Processing of data using or applying new or novel technologies or organisational solutions which make it more difficult to assess the impact on data subjects and the societal consequences, in particular through the use of artificial intelligence and the processing of biometric data, provided that the processing does not concern the mere real-time rendering of facial images.

Apart from the fact that the provisions on profiling (Article 4, No 4, GDPR) and automated individual decision-making (Article 22, GDPR) are of particular relevance for AI, as almost all AI systems are designed to assess human characteristics and machine learning by definition works with automated decisions, the right of access (Article 15, GDPR) is also relevant.

According to a decision of the Austrian data protection authority (of 8.9.2020, DSB 2020-0.436.002), the scope of the right of access also includes the right to be informed about all relevant elements of an automated decision, which is intended to enable the data subject to verify the comprehensibility, correctness and up-to-dateness of the input variables. The information, however, does not include the logic of the algorithm, its source code, the compilation code or the complete documentation. It is apparent that, in making its decision, the data protection authority overlooked the fact that certain calculation methodologies are almost certainly subject to know-how protection.
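
As a purely hypothetical illustration of the scope of disclosure described in that decision, an access response might list the input variables and the outcome while withholding the model’s logic and source code; all field names and values below are invented.

```python
# Hypothetical sketch of an Article 15 access response for an automated
# decision, following DSB 2020-0.436.002: the input variables and the outcome
# are disclosed; the algorithm's logic and source code are not.
from dataclasses import dataclass

@dataclass
class AccessResponse:
    input_variables: dict  # lets the data subject check correctness/up-to-dateness
    decision: str          # the automated outcome
    decision_date: str

response = AccessResponse(
    input_variables={"income": 42000, "payment_defaults": 0},  # invented data
    decision="credit application approved",
    decision_date="2024-05-01",
)
print(response)
```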

3.3. Open data and data sharing

Directive (EU) 2019/1024 on open data and the re-use of public sector information (recast) [2019] OJ L172/56 has applied since 17 July 2021. It replaces Directive 2003/98/EC on the re-use of public sector information, as amended by Directive 2013/37/EU, and is designed not as an amendment but as a new version introducing several innovations. The Austrian Act on the further use of public institutions’ data (Informationsweiterverwendungsgesetz, IWG), Federal Law Gazette I No 116/2022 of 27 July 2022, implements Directive (EU) 2019/1024 at the federal level. In parallel, legislative measures by the federal states are required.

Section 6 IWG requires a written application for the re-use of documents of public bodies, to be submitted to the public body in whose possession the requested document is located. The application must clearly state the content, scope and manner of re-use of the requested documents. With regard to personal data in such documents, reference is only made to the applicability of the GDPR; the Act does not set out the criteria for balancing the interest in further use against the data subjects’ interest in confidentiality.

A critical issue in the open data concept is the protection of the secrecy interests of natural persons. Insofar as personal data may be made publicly accessible in a legally permissible manner, a gateway is opened for the violation of the right to informational self-determination, as the data subject loses control over the further use of his or her data. Given that the legislator has not prescribed sufficient technical and organisational protective measures, the protection of privacy interests depends on the circumstances of each individual case.

3.4. Biometric data: voice data and facial recognition data

According to section 2, paragraph 1 of the blacklist (see Section 3.2 above), the processing of biometric data is also subject to a data protection impact assessment.

4. Bias and discrimination

4.1. Domestic anti-discrimination and equality legislation treatment

The principles of equal treatment in Austria are primarily laid down in the following laws:

  • Federal Equal Treatment Act (Gleichbehandlungsgesetz (GlBG)) for the private sector and in other areas.
  • Federal Act on Equal Treatment in the Federal Sector (Bundes-Gleichbehandlungsgesetz (B-GlBG)) for employment relationships in the federal service.

The GlBG protects against discrimination in employment on the grounds of gender (in particular with reference to marital status or whether someone has children), ethnicity, religion or belief, age or sexual orientation. Discrimination on these grounds is prohibited in the establishment of the employment relationship, in the determination of remuneration, in the granting of voluntary social benefits, in training and retraining measures, in career advancement (in particular promotions), in other conditions of employment, and in the termination of employment.

The problem in the context of AI systems is that discrimination often occurs indirectly: the unequal treatment of a person is not obviously based on one of the above-mentioned grounds of discrimination, but on an apparently neutral rule that has disadvantageous effects. For example, a model instructed to weed out all part-time employees may produce an output containing only men, because in practice the majority of part-time positions are held by women; this is indirect discrimination (see the sketch below). The Austrian courts have not dealt with this so far.
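
The following toy sketch, with invented data (no real dataset or framework is assumed), shows how the apparently neutral “part-time” filter produces exactly this disparate outcome:

```python
# Toy sketch of the indirect-discrimination pattern described above:
# filtering on the apparently neutral criterion "part-time" disproportionately
# removes women. All data are invented for illustration.
applicants = [
    {"name": "A", "gender": "f", "part_time": True},
    {"name": "B", "gender": "f", "part_time": True},
    {"name": "C", "gender": "f", "part_time": False},
    {"name": "D", "gender": "m", "part_time": False},
    {"name": "E", "gender": "m", "part_time": False},
]

shortlist = [a for a in applicants if not a["part_time"]]  # "neutral" filter
share_women = sum(a["gender"] == "f" for a in shortlist) / len(shortlist)
print(f"Women on shortlist: {share_women:.0%}")  # drops from 60% to 33%
```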

5. Cybersecurity and resilience

5.1. Domestic technology infrastructure requirements

As part of the federal government’s strategy for artificial intelligence (Artificial Intelligence Mission Austria 2030, Vienna 2021; see below under Section 7.2), it was generally recognised that, on the one hand, AI systems must be protected against attacks on IT systems and, on the other hand, AI systems must be used in a supportive manner to improve IT security. The Austrian government has therefore committed to supporting the development of models and methods for the security of AI systems. In addition, standards for the resilience of AI systems against attacks and security breaches and the reliability and reproducibility of AI results are to be promoted, and concepts for auditing AI systems are to be developed with experts at national and European level.

In Austria, a number of open research questions relating to the security and security testing of AI-based systems have been identified. Traditional approaches in the field of security testing, especially penetration testing, and techniques such as fuzz testing, encounter problems when applied to modern algorithms, and these problems require further research. In order to address these issues early in the procurement process, a guide for companies on purchasing the most secure AI possible was developed by the St. Pölten University of Applied Sciences and made available on the website of the Austrian Federal Chancellery (www.onlinesicherheit.gv.at/Services/News/Beschaffung-sicherer-AI-Ein-praktischer-Leitfaden.html). The guide makes it possible to determine: (i) whether a product is based on fundamental security considerations; (ii) how essential issues such as control over data and models, patching and the like are handled; and (iii) whether a suitable contact person is available to answer such questions sensibly and correctly. Despite these strategies and initial approaches in Austria to create safe and robust AI systems, there are currently no official recommendations or efforts to enshrine this in law.
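
The following minimal sketch illustrates why classical fuzz testing transfers poorly to AI components: random inputs can reveal crashes, but a model that answers wrongly without raising an error produces no observable failure. The `classify` function is a hypothetical stand-in for the system under test.

```python
# Minimal sketch of classical fuzz testing applied to an AI component.
# `classify` is a hypothetical placeholder for a real model call.
import random
import string

def classify(text: str) -> str:
    # placeholder for the system under test
    return "positive" if "good" in text else "negative"

def fuzz(n_cases: int = 1000) -> None:
    for _ in range(n_cases):
        noise = "".join(random.choices(string.printable, k=random.randint(0, 200)))
        try:
            classify(noise)  # classical oracle: does the system crash?
        except Exception as exc:
            print(f"crash on {noise!r}: {exc}")
    # A wrong but well-formed answer raises no exception, so this style of
    # testing cannot detect it -- one of the open research questions noted above.

fuzz()
```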

6. Trade, anti-trust and competition

6.1. AI related anti-competitive behaviour

So far, in Austrian antitrust practice, no decision of the national competition authorities and courts addresses AI-related market abuses.

In its thesis paper “Digitalisation and Competition Law” (Digitalisierung und Wettbewerbsrecht), the Austrian Federal Competition Authority (FCA) suggests reversing the burden of proof in cases of abuse that are typical of the digital economy. In the FCA’s view, this appears justified where there is a prima facie case of abusive conduct or where official investigative measures quickly come up against natural or technical limits. It would then be up to the (dominant) companies to explain, with recourse to the data available to them, why a certain practice or conduct does not have anticompetitive abusive effects. Following that concept, with regard to AI, the undertakings concerned would, for example, have to prove that their algorithms do not abuse a dominant position on the relevant market.

In its case report concerning its investigation into Amazon, the FCA, confronted with an alleged abuse of dominance by a global digital player, first concluded that Amazon had market power (although the relevant market was not ultimately defined). In defending itself against complaints from retailers, Amazon argued that AI was needed to run its website: the high number of fraud cases using sophisticated technologies made it necessary to take action against possible fraud with automated programmes and self-learning algorithms. The FCA did not comment on this argument in its report (but also did not reject it).

6.2. Domestic regulation

Again, no domestic competition rules or regulations specific to AI exist in Austria. The 2021 reform of the Austrian Cartel Act introduced new criteria for assessing market dominance, all of which address the digital economy. For example, the amended Cartel Act now refers to, inter alia, access to relevant data and the benefits of network effects as (exemplary) criteria for market dominance.

7. Domestic legislative developments

7.1. Proposed and/or enacted AI legislation

Austria has already begun preparations for the AI Act. A national body has been established to coordinate and monitor the rules of the AI Act, namely the AI Service Center, which was set up within the Austrian Regulatory Authority for Broadcasting and Telecommunications (RTR). A corresponding motion was discussed in the Constitutional Committee of Parliament on 22 January 2024 and passed in the National Council on 31 January 2024.

The AI Service Center is to be transformed into an AI authority under the EU Artificial Intelligence Act in a further expansion step. The aim is to ensure that AI systems placed on the EU market and used in the Union are safe and uphold the fundamental rights and values of the EU.

The same motion submitted to the National Council on 22 January 2024 also included the establishment of an “Advisory Board for Artificial Intelligence”, likewise based at RTR. The 11-member advisory board was constituted on 28 February 2024 and will advise policymakers on technical, social and ethical issues relating to artificial intelligence.

7.2. Proposed and/or implemented Government strategy

Austrian federal ministers presented the federal government’s strategy for artificial intelligence (AIM AT 2030 for short), with its goals and fields of action, in September 2021. Among the questions dealt with is whether the current legal framework for product liability, product safety, data protection or consumer protection is sufficient for products with embedded AI, or whether new regulations are needed, especially with regard to learning AI systems.

The Austrian strategy follows the European draft of the AI Act and intends to use AI regulatory sandboxes to analyse how innovative new technologies function in the existing regulatory environment; this also offers the legislator the opportunity to gain important insights into whether and where there is a need for regulation.

A revised strategy was to be adopted in the first half of 2024 (see www.bmf.gv.at/presse/pressemeldungen/2023/september/ki-massnahmenpaket.html). However, no such revision had been published as of July 2024.

8. Frequently asked questions

8.1. How is liability regulated between the different parties in AI systems (e.g. software developer, data analyst and user)?

There are no specific rules in Austria. The general rules of fault-based liability apply to the developer within a contractual relationship, while the strict liability rules of the Product Liability Act apply to damage caused by products in which AI systems are implemented.

8.2. Can we license data? Are there any specific rules? 

Austrian law has not yet specifically addressed the licensing of data. However, within the framework of the private autonomy prevailing in Austria, each party is free to make its property (and therefore also data; see Section 3 above) available to third parties and to demand payment for it.

8.3. Can personal related data be used for training of AI? 

See above, Section 3.

 

With many thanks to Gerhard Fussenegger, a partner at bpv Hügel Rechtsanwälte, for authoring Section 5. 
