Human touch: generative AI and the concept of human authorship
As regulators around the world turn their attention to generative AI, the entertainment and media industry will most likely prove to be one of the key battlegrounds on which the boundaries of creativity, content generation and enforcement will meet.
From large language model (LLM) text-generation tools that can assist scriptwriters to image-creating platforms that generate storyboards and scenes, we are a long way from Alfred Hitchcock’s use of 2D animated patterns in 1958’s Vertigo (an early example of computer-generated imagery in film).
What about music and video? Google’s MusicLM can generate music from text descriptions and OpenAI’s Sora can create realistic and imaginative scenes from simple text instructions.
The use of generative AI in creative content generation provides many outlets and channels for the creative process, but some factions of the industry fear that the risks will outweigh the benefits. Not least of these concerns is the ownership of such works and how we define ‘authorship’.
Broadly, international copyright law and EU legislation affirm the legal position that an author must, generally, be human. After all, EU case law shows us that key elements of authorship include the ability to exercise creative freedom and to make creative choices with ‘authorial intent’, so that the work is the author’s own ‘intellectual creation’ and reflects the author’s personality – clearly these are uniquely human capabilities… or are they?
In the US, recent decisions by the US Copyright Office Review Board and the courts have reflected a position that rejects copyright protection for AI-generated works, noting that human authorship is a “bedrock requirement of copyright”.
However, as generative AI tools become more widely used, there will be inevitable debates about the human contribution to AI-generated content, including how we assess and define these contributions, and where human intelligence ends and artificial intelligence begins. Legally, copyright protection and related rights have been – until now – the ace up the sleeve of creative professions. But with more content being generated and disseminated online and – potentially – less protection on offer for copyright (and other rights) holders, the creative industry might find itself being dealt a new hand.
What’s the story? Regulating market power
One just has to look at the recent swathe of regulations, including the EU’s Digital Services Act (DSA) and Digital Markets Act (DMA), and the UK’s Digital Markets, Competition and Consumers Act (DMCC) and Online Safety Act (OSA), to recognise that regulatory scrutiny of activities that touch on any part of the digital experience of consumers is increasing the responsibilities placed on businesses with online activities.
At the same time, this enhanced scrutiny seems to be expanding the opportunities for consumers as well as for challengers to incumbents. Some recent high-profile examples:
- Apple received a €1.8bn fine from the European Commission for blocking app developers from pointing their users towards cheaper transaction options outside of the App Store.
- Epic Games recently credited the DMA with its ability to now open its own Epic Games Store for gaming apps for iPhones in the EU, although it is challenging Apple’s ‘blocking’ of users elsewhere, including the UK, where Epic is hopeful the new UK digital markets regime will effectively combat such practices.
- Spotify affirmed that it would opt into Apple’s new terms for music-streaming apps, which would allow the company to inform its users with iPhones of subscription prices (for example, upgrades), thereby enabling potential conversions of free users to subscribers.
Meanwhile, in the UK, a storm has been brewing in the wake of Oasis’s announcement of a reunion tour. Following several complaints by consumers and high-profile consumer groups such as Which? about Ticketmaster’s use of ‘dynamic pricing’, the Competition and Markets Authority (CMA) launched a consumer law investigation into the company.
Dynamic pricing is where a business adjusts its prices according to market conditions such as significant demand and has long been a practice associated with live music and sporting events. Although the practice is not unlawful by default, it has raised concerns in recent years and may – in certain circumstances – result in a breach of consumer law or competition law.
In particular, the investigation will consider whether Ticketmaster has engaged in unfair commercial practices under the current rules (the Consumer Protection from Unfair Trading Regulations 2008). As part of its information gathering, the CMA is inviting fans to submit evidence of their ticket purchasing experiences and may even approach the band’s management. The very public consumer backlash has now prompted regulatory interest in Ticketmaster from the UK, the EU and Ireland and led to the Sale of Tickets (Sporting and Cultural Events) Bill being put forward in the House of Commons.
Key takeaways? As the CMA gains enhanced consumer law powers, it will have the flexibility to choose the most appropriate enforcement tool at its disposal. Businesses will do well to consider their online practices, mitigate consumer complaints and pay attention to the regulator’s direction of travel as it advances its enforcement priorities.
Merge overkill: media mergers and the Hollywood trend towards consolidation
In recent years, the entertainment and media industry has seen high-value, ground-breaking and industry-defining deals lead to consolidation. While these ‘bet the farm’ deals have reshaped the industry landscape, they have not escaped regulators’ attention. From streaming to gaming and music labels to media networks, the need for new content, brands, distribution pipelines and revenue streams is driving the urge to merge.
One of the latest is the Skydance Media and Paramount Global deal – valued at $8bn – to create a new technology-media hybrid company to meet the growing demands of shifts in consumer viewing and media consumption.
Upon announcement of the deal, speculation was already rife that it could face antitrust scrutiny from a vigilant Department of Justice (DOJ) in the US concerned by what some antitrust academics have posited as a possible “trend toward concentration” in Hollywood.
If that were to happen, it would not be the first time that antitrust scrutiny threatened to completely derail a deal.
Disney and Reliance’s $8.5bn media merger was another deal that drew antitrust scrutiny after the Competition Commission of India raised concerns about its hold on most of the cricket rights for TV and streaming in India.
In 2023, the UK’s CMA famously blocked Microsoft’s $68.7bn deal to buy Activision, citing concerns that the deal would distort competition in the nascent cloud gaming market and ultimately lead to reduced innovation and choice for the UK’s 45 million gamers.
These deals, along with others such as Disney+/Hulu and UFC-WWE, have attempted to redefine viewer and consumer experiences and interactive entertainment. But questions remain about bundle deals, price hikes and reduced competition.
Lessons learned? Clearance in one major jurisdiction doesn’t necessarily mean clearance in another, and the attentiveness of certain regulators (such as the CMA) to novel theories of harm, as well as to innovative deals with a digital component, means less predictable outcomes for deal parties.
It’s all about… consent: media, commercialising content and training data
Earlier this year, the Financial Times became the first major UK-based news publisher to enter a licensing deal with OpenAI – which includes providing content and training data to the ChatGPT owner – following similar agreements made by Axel Springer, Le Monde and Associated Press. Since then, other media publishers have also signed up, including News Corp, Condé Nast and Hearst.
While even the CEOs of some of these publishers have acknowledged certain ethical concerns inherent in these deals, there are also clear legal concerns around commercialising content that captures someone else’s data and intellectual property in the process (consent, attribution and copyright among them), highlighting tensions in the relationship between AI development and media platforms.
Recently, this tension has been tested by social media platforms, which are becoming an increasing source of news. As a result, data protection authorities are ramping up efforts to meet the rising risks.
In Ireland, the Data Protection Commission (DPC) launched urgent court proceedings against X to stop the illegal processing of the personal data of more than 60 million users in the EU/EEA to train its AI technologies (Grok). Noyb, the European Center for Digital Rights founded by Max Schrems, subsequently lodged GDPR complaints in Austria, Belgium, France, Greece, Ireland, Italy, the Netherlands, Spain and Poland.
The key concerns?
- Can ‘legitimate interest’ provide an alternative legal basis where consent was never requested?
- What about the ‘right to be forgotten’ once the data in question has been ingested into the AI system?
- Can providers of AI systems correct inaccurate personal data once it’s been used to train the systems?
Similar situations involving other Big Tech companies have contributed to data protection authorities seeking clarity around these issues and the European Data Protection Board has subsequently been asked by the Irish DPC to issue an opinion on AI training and the application of EU data protection rules.
In the UK, the Information Commissioner’s Office (ICO) has issued a consultation to understand the allocation of accountability for data protection and compliance along the supply chain for generative AI services.
Questions abound about how organisations – credible or otherwise – will look to harness the opportunities (and manage the risks) that generative AI brings when it comes to users, followers and viewers. In light of this, media businesses will want to carry out robust internal data protection impact assessments and remain vigilant about cross-border enforcement risk.
Safe as houses: age, online safety and content creation
From the French authorities’ arrest of Telegram CEO Pavel Durov to concerns about the spread of misinformation via social media platforms contributing to riots across UK cities, online safety has been at the centre of discussions about how we regulate and moderate online content.
The expected far-reaching impact of the UK’s Online Safety Act touches on all aspects of content production and the creative industries, with potentially significant implications for TV and film companies, commissioning broadcasters and marketing agencies, as well as gaming platforms and tech/social media platforms.
- The gaming industry is likely to be particularly impacted by the OSA as many gaming studios offer products and services that enable user-generated content alongside multiplayer games, social games and online chat forums.
- Content producers will want to ensure that their products (whether productions or certain promotional clips) do not contain regulated content. While the OSA is clearly focused on user-to-user services and content, these are defined quite broadly.
It remains to be seen how the OSA will function in practice, but depending on how certain types of content are shared or promoted, clips containing what may be identified as illegal or harmful content – even in the context of a real-life narrative, such as bullying or self-harm – could potentially be caught by platforms’ filters for removing flagged content under measures imposed by the OSA.
Significantly, Ofcom has been given expansive enforcement powers under the OSA with the ability to issue fines of up to 10% of annual global turnover or £18m (whichever is greater).
Recent investigations have affirmed that video-sharing platforms and age-verification measures continue to be on the regulator’s radar, even as businesses wait for Ofcom to implement the OSA.
In the EU, similar obligations apply under the DSA, an EU-wide regulation designed to address illegal online content and protect fundamental rights and freedoms of users of online platforms. Notably, the DSA became fully operational on 17 February 2024, with the EU Commission already launching related investigations.
Although still early days, the DSA may prove to be challenging for media companies to navigate, depending on their business models and whether any of their activities are caught by the restrictions and/or sit within the relevant categories.
- Some notable issues for entertainment and media businesses to consider include how illegal content is defined and the content moderation requirements, some of which are quite advanced (from informing users about restrictions imposed on them due to illegal content or infringement of the terms and conditions, to cooperating with ‘trusted flaggers’).
In the case of a breach, the responsible regulators can impose fines of up to 6% of a company’s worldwide turnover.
The stakes are high; content producers therefore need to be mindful of where their content is being placed and how it is being shared.
Reign Lee is the head of strategy at Van Bael & Bellis in London. She has significant experience working with brands, funds, online marketplaces as well as creative industries (particularly music), and specialises in UK/EU digital regime counselling and compliance.
Thibaut D’hulst is the head of the data protection and intellectual property practices at Van Bael & Bellis in Brussels. He regularly advises clients on all aspects of intellectual property law, including strategies to protect trademarks, databases and other intellectual property, as well as litigation and technology projects related to compliance with intellectual property, data protection and/or pharmaceutical laws.
Ossama M’Rini is an associate in the Van Bael & Bellis commercial team, based in Brussels. He advises domestic and international clients on a wide range of commercial law issues, with a focus on IT/IP and data protection. He is a member of the firm’s AI taskforce, where he examines issues at the intersection of law and technology.