Who’s walking the runway: fashion models or AI?

Foley’s Jeffrey Greene looks at how AI is revolutionising the fashion and beauty industries and the issues it raises around bias, disclosure, right of publicity and copyright


The modelling industry is buzzing with questions about whether artificial intelligence (AI) will replace human fashion models on the runway. 

What would you do if you were a fashion model who walked a runway, only to find out later that your face had been replaced with an AI-generated one? This is what happened to Shereen Wu.

The incident involving Taiwanese-American runway model Shereen Wu sparked conversations about the ethical use of AI in the fashion industry. Wu alleged that fashion designer and Project Runway alum Michael Costello posted an Instagram photo after a Los Angeles fashion show in which her face had been edited out and replaced with an AI-generated one, changing her appearance from Asian to White to make her look more Caucasian.

In response, Wu posted a TikTok video sharing her thoughts and concerns and questioning Costello’s motives; the video has generated more than one million views. The allegations brought Costello backlash and negative headlines. He disputed Wu’s claims, arguing that he had reached out to her on numerous occasions without success and that neither he nor his editing team had edited any of the videos or images. The dispute raises both misappropriation concerns and moral dilemmas.

In fact, AI use in e-commerce channels and commercial modelling is on the rise. Approximately 73% of fashion executives expect to make generative AI a priority this year to assist in the creative process, such as design and product development. While many businesses remain hesitant, companies such as Levi Strauss have begun using generative AI to display more diversity on e-commerce channels.

In March 2023, Levi Strauss announced its partnership with digital fashion studio Lalaland.ai to generate diverse AI models. In a press release, the company stated the partnership would assist in “supplementing models” and “creating a more personal and inclusive shopping experience” for consumers. After the announcement, Levi Strauss received backlash from the fashion industry, due in part to the concern that the use of AI would replace humans and blur the line between “true” and “manufactured” representation.

Additionally, critics argued Levi Strauss was not authentically addressing the issue of representation, or the lack of it, in the fashion industry. Rather, they said, the company was taking a shortcut to save costs by avoiding the need to hire human models or to secure stylists and makeup artists. In response to the backlash, Levi Strauss released a clarifying statement that its use of AI was not its entire approach toward diversity, equity and inclusion (DEI) goals. Despite the statement, models and consumers expressed concern about how companies are using AI-generated models to “increase diversity”, as some say this is not an organic way to tackle the issue.

When Levi Strauss made its initial comments, models of colour expressed concern that they already receive fewer casting and booking opportunities than their counterparts. Commentators assert that this use of AI essentially perpetuates inequality because it creates only an illusion of diversity. In reality, the fashion industry as a whole has grappled with the issue of diversity for many years.

Following the social unrest in the summer of 2020, fashion companies and brands developed initiatives to improve their DEI efforts, including more diverse hiring practices and marketing and advertising campaigns. However, many remain sceptical that anything has actually changed. This scepticism, coupled with growing unease about AI replacing human models altogether, has heightened concern among models of colour, who are uncomfortable with fashion companies and brands claiming to use AI to increase diversity.

Using AI fashion models can also reduce the costs associated with photography shoots and fittings, and these savings are proving attractive to fashion brands, companies and agencies. With AI-generated models, there are no location fees for photoshoots, no time spent scouting locations, no model fees for fittings and no photographers’ fees at all. However, industry experts believe brands and companies should balance using generative AI to promote efficiency and reduce costs with enhancing consumer satisfaction, making AI more inclusive for all and tackling the issue of diversity.

As an example, The Diigitals, an AI and 3D modelling company, collaborated with Down Syndrome International (DSI) and creative agency Forsman & Bodenfors to create Kami, an AI-generated virtual influencer who displays physical features associated with Down syndrome. According to The Diigitals, “the purpose of creating Kami was to celebrate and promote diversity within the metaverse, showcasing the incredible talents and capabilities of individuals with Down syndrome”.

Kami won three awards at the Cannes Lions Festival of Creativity in recognition of the creativity and initiative behind the collaboration. Young women with Down syndrome from different countries volunteered to assist in Kami’s creation, allowing The Diigitals to create an authentic AI-generated individual with Down syndrome. Many see Kami as empowering people with Down syndrome, allowing them to feel acknowledged in a world that often casts them as outliers. Kami’s mission is to invite fashion brands and communities to change the digital space, making it a more inclusive and welcoming place for people living with Down syndrome.

Addressing bias

Given criticism over the past decades that the modelling industry does not accurately represent and include members of underrepresented groups, the use of AI has raised additional concerns, particularly about algorithmic bias. Algorithmic bias refers to the “systematic and replicable errors in computer systems that lead to unequal[ity] and discrimination based on legally protected characteristics, such as race and gender”.

Dr. Karima Ginena, a social scientist, conducted a recent test and found that when prompting certain AI generators for images, “the programs almost always delivered a picture of a white person”. AI algorithms are trained on the information provided to them; if information is missing or excluded, the results may not be representative. Consequently, underrepresented models who have historically been excluded from castings, runway shows or photoshoots would continue to be omitted. Broderick Turner, a Virginia Tech marketing professor, recommends greater representation not only in AI training data but also among the people who code and curate that data, which would help the data more closely reflect current demographics.

AI in the beauty industry 

The beauty industry has also seen an increase in the use of generative AI. Various beauty brands and companies currently use it for skin diagnostics (such as Sephora’s Color IQ matching technology), product effect simulations (such as Haut.AI’s Skin GPT product) and interactive skin consultations (from companies such as Galderma).

With AI, consumers can try on lipstick colours, match foundation shades, experiment with eyeshadow colours and discover a go-to mascara, while receiving custom product recommendations and tailored shopping experiences. In turn, these AI-powered apps and services continue to learn and gain insights into consumer preferences. Generative AI can recreate a makeup look, or create a similar one based on how it decodes that look, and it can provide recommendations based on consumer input, drawing on past augmented reality virtual try-ons combined with user-uploaded images.

Even Lisa Eldridge, a world-renowned professional makeup artist and beauty expert, released a video showing how AI designed a makeup look for her. Although ChatGPT generated prompts for Eldridge to follow based on the information it could obtain about her years of experience as a makeup artist, it missed specific techniques she uses. Eldridge ended her video saying she honestly does not know whether AI will completely replace makeup artists, but her tutorial is an example of how creatives can use AI as a tool in the beauty and makeup industry.

In a recent survey, about 29.6% of makeup artists said they are concerned AI will result in job displacement and a loss of personalisation. Because generative AI can replicate and suggest makeup looks far faster than humans can, makeup artists are wary of AI’s continued progression in this space.

A 2023 Goldman Sachs report predicted that AI could disrupt approximately 300 million full-time jobs by reducing labour costs and increasing automation. Included in that number are the jobs of various kinds of artists, including makeup artists. If AI can generate makeup looks and provide the tools to do so, are makeup artists still needed? What about the actual application? This is where makeup artists remain essential, as AI does not currently “apply” makeup. Nor is AI inherently creative: it only builds on the data it is given, a limitation illustrated by Lisa Eldridge’s tutorial noted above.

Model disclosure

Although not a legal requirement, disclosing which models are AI-generated on e-commerce channels is one way for brands and companies to be transparent with consumers and models. On 2 February 2024, the European Union unanimously approved the EU Artificial Intelligence Act (EU AI Act or Act), which requires, among other things, transparency about AI-generated content, with disclosure requirements applying both to AI systems and to the people using them.

The EU AI Act is seen as revolutionary, as it is the first legal framework to address the risks of AI systems. The regulation sorts AI systems into four risk categories: (1) unacceptable risk; (2) high risk; (3) limited risk; and (4) minimal risk. Providers of high-risk AI systems, for instance, will have to undergo an assessment and comply with AI-specific requirements, including registration in an EU database. Once compliant, those systems will bear a European Conformity (CE) marking and may be placed on the market, with any substantial change during a system’s lifetime requiring a new assessment. Penalties will be imposed for noncompliant AI systems.

Although the EU is the first to require these measures, other markets may soon follow. The act could affect the United States, serving as a blueprint (or, at a minimum, guidance) for US federal agencies assessing AI system deployments in areas of societal importance such as hiring practices, healthcare or transportation. US federal agencies have indicated that they intend to police AI systems, ensure responsible innovation and pursue enforcement against discrimination and bias. While the EU AI Act is the first comprehensive approach to regulating AI systems, other laws addressing AI risks are sure to follow around the world.

Right of publicity

Under most laws, fashion models are considered independent contractors and, unlike actors or performers, are therefore unable to unionise to combat name, image and likeness (NIL) issues. Models normally sign releases granting certain rights to, and setting conditions for, the commercial use of their NIL. Given the increased use of AI and the way it draws on the data it receives to create an output, AI could essentially generate the faces of many models without their consent.

However, the Model Alliance’s Fashion Workers Act seeks to address this issue by prohibiting model management companies from, among other things, “creating, altering, or manipulating a model’s digital replica using artificial intelligence without clear, conspicuous[,] and separate written consent from the model”. This legislation appears to be a step in the right direction toward providing models with labour protections, in addition to protection against AI.

Copyright considerations

Copyright concerns are rising as fashion designers and creatives experiment with using AI to generate designs and details such as fabrics, colours and patterns. Yet under US law, copyright protects only non-functional creative elements. In 2023, the US Copyright Office noted there are circumstances in which works containing AI-generated material will contain “sufficient human authorship” to qualify for copyright protection. Even so, it remains unclear whether AI-generated fashion designs are protected.

The Compendium of US Copyright Office Practices provides guidance on who is deemed an “author”: “[t]he U.S. Copyright Office will register an original work of authorship, provided that the work was created by a human being,” and “works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author” are not registrable. Although the compendium does not fully account for current and future uses of AI-generated fashion designs/patterns, there have been various cases involving AI-generated art and copyright. 

For example, in 2023 the US Copyright Office ruled that an award-winning AI-generated piece of art, Théâtre D’opéra Spatial, did not qualify for copyright protection. Matthew Allen, the artist behind the work, used Midjourney, a generative AI program, to create it. Ultimately, the Copyright Office concluded that copyright protection does not extend to AI: the parts of the work Allen modified with Adobe software could amount to original authorship, but the portions generated solely by AI could not be copyrighted.

Separately, in Thaler v. Perlmutter, the District Court for the District of Columbia ruled that Thaler’s AI-generated work of visual art was not copyrightable, based on the Copyright Act’s plain language requiring that the author of an original work be a human. Thaler had created the piece via DABUS, an AI system he built, without any human input. Earlier this year, Thaler appealed to the US Court of Appeals for the District of Columbia Circuit, arguing that creative works generated by AI should be afforded copyright protection. There has yet to be a final ruling.

Meanwhile, lawsuits have focused on AI art generators, original copyright holders and fair use. In Getty Images v. Stability AI, for example, Getty Images alleges that Stability AI, an AI art generator, copied its images without permission to train its AI model. There has yet to be a ruling; however, the court will surely analyse the US fair use doctrine to determine whether Stability AI’s use was fair.

What does all this mean?

While frequently in the headlines today, AI is not a new concept. It’s been around since the 1950s and AI research continues to evolve. 

Generative AI tools such as DALL-E 2, Midjourney and Stable Diffusion are increasingly being used to produce static 2-D images, with new applications like Runway capable of video output. These advancements promise various benefits to the fashion industry, including reduced costs, optimised inventory management and pricing strategy, improved analysis of customer preferences, and enhanced design and creativity. 

Additionally, there are various proposed use cases involving generative AI across the fashion value chain, such as in the fields of merchandising and product, supply chain and logistics, marketing, digital commerce and consumer experience, store operations, and organisation and support functions.

With all the concerns and questions about AI’s effects on the fashion and beauty industries, fashion leaders, companies, agencies and executives should balance its perceived positives, such as cost reductions, enhanced efficiency and personalised consumer experiences, against its perceived negatives, such as algorithmic bias and potential job displacement. Clearly AI is here to stay and will continue to revolutionise the fashion industry.

Fashion companies and brands should look at AI as an asset and a means to unlock opportunities that propel their businesses forward. Meanwhile, models, designers and creatives should approach AI as a collaboration with brands, knowing that they remain essential to bringing authenticity, genuine emotion, personal experience and character to the table. In combination, this will help create an enhanced consumer experience.

Jeffrey Greene is a partner with Foley & Lardner LLP based in the firm’s New York office where he co-chairs its fashion, apparel and beauty industry team and is a member of the firm’s intellectual property department. He can be reached at [email protected].

