In a prior blog post on the landmark decision in Getty Images (US) Inc v Stability AI Limited [2025] EWHC 2863 (Ch), I summarized the U.K. court's finding that Stability AI was not liable for secondary copyright infringement by importing or distributing models that were partly trained using images allegedly owned or exclusively licensed by Getty. The Getty decision also contains some very important findings beyond copyright infringement. These include the court's findings that Stability AI could be liable for trademark infringement where Getty watermarks were displayed in outputs in response to user prompts; that licenses which purported to be exclusive under New York law were nevertheless not exclusive under U.K. copyright law, and so did not give Getty standing to sue for infringement; and that a mere click accepting online terms would satisfy the signature requirement for a valid exclusive license.
This blog post summarizes the court's findings on the trademark infringement issues. It also summarizes the Canadian AI company Cohere's recent motion to dismiss trademark and copyright claims against it in the United States. It analyzes how traditional trademark principles apply to generative AI systems in both Getty Images (US) Inc v Stability AI Ltd and the Cohere litigation, focusing on the use of protected marks in training data, user prompts, and AI-generated outputs. Some questions this blog addresses are:
- Can the use of a trademark in AI training data give rise to trademark infringement?
- Does the appearance of a trademark in AI-generated outputs constitute “use in the course of trade”?
- How did the court in Getty v Stability AI assess the likelihood of confusion in an AI context?
- What role do user prompts play in determining trademark liability for AI developers?
Getty trademark infringement – high-level findings
More than half of the 200-plus-page Getty decision was taken up with the court's assessment of whether Stability AI was liable for trademark infringement for displaying Getty watermarks in response to user prompts. Examples of images in outputs bearing Getty watermarks are shown below:
The court found that, in a limited set of circumstances, some of the Stability AI models produced images with synthetically generated watermarks that infringed Getty's trademarks under UK trademark law.
Justice Smith summarized her decision as “both historic and extremely limited in scope”.[i]
Other commentators have focused on the "extremely limited scope" statement by the judge. But her findings on trademark infringement are truly also "historic". This case may have dealt with synthetically generated watermarks, but future cases may deal with other types of signs (i.e., marks) that are generated by prompts with other generative AI services, other types of AI-generated content, and even potentially more traditional services such as search services.
Stability AI defended the trademark allegations by contending, among other things, that while Stable Diffusion may be used to generate synthetic images which include Getty Images' watermarks: (i) where such images are generated by a user, this is the result of third party use of Stable Diffusion, and not a statement or commercial communication attributable to Stability, or for which Stability is responsible in law; (ii) any such generation of watermarks does not amount to use of any sign in those watermarks in the course of trade; and (iii) watermarked synthetic image outputs will only be generated with the willful contrivance of the user.
Justice Smith found against Stability AI on these arguments, at least in some cases, to find that it infringed Getty’s trademark rights under Sections 10(1) and 10(2)(b) of the UK Trade Marks Act (the “TMA”).
These findings were strongly contested and could raise very important issues on any appeal.
Getty was able to establish through technical and expert evidence that:
- Real users had been able to generate synthetic images with Getty (including iStock) watermarks using certain versions of Stability AI models.
- Some watermarks only resulted from prompting designed to display the watermarks such as through the use of verbatim prompting.
- In many cases watermarks were not displayed in outputs because an effective watermark filter was used or because some other technique was used to identify and remove images containing Getty watermarks.
- Some watermarks were distorted or blurred to such an extent that they could not support a claim for trademark infringement.
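The decision does not disclose how any watermark filter actually worked; a common approach is to screen training records before training. The sketch below is purely hypothetical (the hint list, record format, and function names are my own illustration), dropping records whose source URL or caption suggests a stock-photo watermark:

```python
# Hypothetical illustration only: drop training records whose URL or
# caption suggests a watermarked stock image. Real pipelines typically
# use image classifiers rather than text heuristics.

WATERMARK_HINTS = ("gettyimages", "istockphoto", "watermark", "stock photo")

def filter_training_records(records):
    """Keep only records with no watermark hint in their URL or caption."""
    def looks_watermarked(rec):
        text = (rec.get("url", "") + " " + rec.get("caption", "")).lower()
        return any(hint in text for hint in WATERMARK_HINTS)
    return [rec for rec in records if not looks_watermarked(rec)]

records = [
    {"url": "https://example.com/cat.jpg", "caption": "a cat on a sofa"},
    {"url": "https://media.gettyimages.example/123.jpg", "caption": "a dog"},
    {"url": "https://example.com/sky.jpg", "caption": "stock photo of clouds"},
]
clean = filter_training_records(records)
print(len(clean))  # 1
```

A text heuristic like this is obviously imperfect, which is consistent with the court's finding that filtering reduced, but did not always eliminate, watermarked outputs.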
After reviewing extensive evidence, the judge confessed that she “can arrive at no determination as to the statistical probability of watermarks* being generated in any given situation based on any of the experiments undertaken by the parties.”
However, the court found that in real world examples the Getty marks could appear in outputs such as the Donald Trump Image shown below.
Findings on Section 10(1) of the TMA
The court made some important findings in assessing whether Stability AI was liable for infringement under Section 10(1) of the UK TMA.
Section 10(1) TMA[ii] requires, among other things, that the use of the signs[iii] must be in the course of trade, and must be in relation to goods or services which are identical to those for which the trade mark is registered.[iv]
Use in the course of trade
With respect to use in the course of trade, the alleged infringer only "uses" a sign if it uses it "in its own commercial communication". There must be some "active behaviour" on the part of the defendant "and direct or indirect control of the act constituting the use", such as "affixing" the sign on the goods, or "offering" or "supplying" services under that sign. Under this law, e-commerce platforms like eBay that merely enable uses by customers, and internet referencing service providers like Google which only create the technical conditions necessary for the use of the infringing sign, will not meet this requirement.[v]
In deciding whether Stability AI's models met these requirements, the court accepted Getty's contention that Stability AI was responsible for the outputs that contained the Getty watermarks because Stability AI trained the models that resulted in the generation of the synthetic Getty watermarks. The court agreed with Getty's submissions, essentially finding that Stability AI's role in the generation of the watermarks was not merely passive, instrumental, and automated, the hallmarks of passive, neutral, innocent intermediaries, although the court did not use these terms.
Getty Images contend that (unlike in Google France), Stability is using the sign for its own commercial communication: the communication that bears the watermark* in the form of the output image is the commercial communication of Stability because it is generated by its Model. This, says Getty Images, involves more than merely storing the output images (unlike in Coty) – but instead involves offering the service of generating the images and putting those images onto the market. Unlike in Daimler, this case also involves active behaviour and control on the part of Stability because (i) it is the entity that trained the Model; (ii) it is the entity that could have filtered out watermarked images in order to ensure that its model did not produce outputs bearing watermarks*; (iii) it makes the Model available to consumers through GitHub, Hugging Face, the Developer Platform and DreamStudio (which I have accepted in relation to v2.x; SD-XL and v1.6. For v1.x the position is more complex); and (iv) it is the entity making the communication that bears the relevant signs. None of this can be said to be the independent action of another economic operator.
The court rejected Stability AI’s argument that the models were tools controlled by, or largely controlled by, users. The court continued:
To my mind, this goes beyond merely creating the technical conditions necessary for the use of the Sign. The provision of access for users to the Models via the different access mechanisms cannot be equated merely with the storage of goods by Amazon (Coty), or with Google allowing advertisers to use the signs (Google France), or with eBay making its online marketplace available to customers (L’Oréal).
While an AI model such as Stable Diffusion may be described (in simple terms) as a tool to enable users to generate images, that is not a complete description. As the Experts agreed in the Technical Primer, Stable Diffusion is a machine learning system which derives its primary function largely from learning patterns from a curated training dataset. Its final function is not directly controlled in its entirety by the engineers who designed it, but a large part of its functionality is indirectly controlled via the training data. The model weights are learned from the training data and it is the model weights which control the functionality of the network. Although the process of inference does not require the use of training data, the outputs generated during inference will (at least indirectly) be a function of that training data. Thus, as the Experts agree, the generation of watermarks* by the Model “is due to the fact that the model was trained on some number of images containing this visible watermark”. This is the responsibility of Stability…
I agree with Getty Images that it is very difficult to see how it can sensibly be argued that the user of Stable Diffusion (by whatever access mechanism) has any control over when a watermark* is produced. As Getty Images submit, the only entity with any control in any meaningful sense of the word over the generation of watermarks* on synthetic images is Stability. It is certainly not “passive” as Stability submits.
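The experts' observation quoted above, that outputs at inference are (at least indirectly) a function of the training data encoded in the model weights, can be illustrated with a toy example. This is my own hypothetical sketch, not drawn from the decision: a one-parameter model is trained on pairs following y = 3x, and at inference, with the training pairs gone, the learned weight still reproduces that pattern.

```python
# Toy illustration: the training data is absent at inference, yet the
# learned weight encodes it, so the trained-in pattern reappears.

def train(pairs, lr=0.01, epochs=2000):
    """Fit a single weight w minimizing squared error of w*x against y."""
    w = 0.0
    for _ in range(epochs):
        for x, y in pairs:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)**2
    return w

w = train([(1, 3), (2, 6), (3, 9)])  # training data follows y = 3x
# Inference uses only the weight; the pattern from training reappears.
print(round(w * 10))  # 30, i.e. the model predicts 30 for input 10
```

On this view, the "control" Justice Smith attributed to Stability AI is control exercised at training time, through the choice of data, rather than at the moment any particular output is generated.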
Comments on Stability AI's responsibility for outputs bearing Getty watermarks – use in the course of trade
As a result of digitization and the growth of business models over the Internet and other digital networks, courts have had to grapple with a myriad of situations where a person's liability depended on how their role in the activity was characterized under the applicable law. The Getty case examined the responsibility of generative AI services, focusing primarily on the safe harbors established by the European Union E-Commerce Directive (2000/31/EC) and cases decided under that framework. The court did not canvass the many other prior examples where the actions of digital actors were assessed to determine their liability under different laws.
Many prior cases, for example, have examined the liability of ISPs, OSPs, social media platforms, and telecom, hosting, search engine, cable and e-commerce providers. The situations in which liability arose were varied, including liability for copyright infringement, aiding and abetting terrorism, defamation and other tortious conduct, and rights or defenses based on freedom of expression, such as First Amendment rights.
The roles played by different digital actors, and the question of when they should be regarded as responsible for harms resulting from uses of technologies they were somehow involved with, have prompted laws around the world, including the U.S. DMCA and s.230 of the CDA, the EU e-Commerce Directive, and various similarly based laws such as the copyright exceptions in the Canadian Copyright Act for ISPs, hosting providers and search engines. The liability of digital publishers was also addressed in the USMCA.
It is beyond the scope of this blog to highlight the many nuanced conclusions of courts in the UK, EU, Australia, the U.S. and elsewhere around the world that have examined the role of digital actors and their responsibilities for various kinds of harms.
What can be said, however, is that the court's finding on the trademark infringement responsibility issue is a complex and controversial one. Many operators of sophisticated technological services could be said to be "responsible" because they created or programmed the system or service, or because their services operated automatically. It could also be said that but for their development and deployment of these technologies the harms resulting from the uses of their services would not have occurred, and further, that the providers could have taken steps to mitigate or avoid the harms. However, while these factors, which Justice Smith relied on, have been important considerations in determining the responsibility of digital actors in other circumstances, they have not necessarily been sufficient, either alone or in combination, to create liability or responsibility for the resulting harms.
Further, the prior precedents have not addressed the conditions for establishing responsibility for systems that have been trained to respond to human and machine inputs autonomously and unpredictably. In other contexts, such as whether human users of generative AI systems can obtain copyright or patent protection for synthetic content and inventions, the users have been denied rights because their contributions to works or inventions have been too attenuated. Nor have the systems themselves been recognized as capable of holding such rights, although this has often been determined on statutory construction grounds and not based on considerations as to whether generative AI systems have contributed sufficient creativity (or skill and judgment) or inventive ingenuity.
The responsibility of providers of generative AI systems for harms caused by their technologies in other areas is only now starting to be considered by the courts. The U.S. Supreme Court touched on the issue in Moody v. NetChoice, LLC, 144 S. Ct. 2383 (2024), in considering whether state laws that restrict the ability of social-media platforms to control whether and how third-party posts are presented to other users run afoul of the First Amendment. The U.S. high court held that a platform's algorithm that reflects "editorial judgments" about "compiling the third-party speech it wants in the way it wants" is the platform's own "expressive product" and is therefore protected by the First Amendment.
Justice Barrett’s concurrence in Moody on the intersection of A.I. and speech is instructive. Justice Barrett hypothesized the effect using A.I. to moderate content on social media sites might have on the majority’s holding that content moderation is speech. She explained that where a platform creates an algorithm to remove posts supporting a particular position from its social media site, “the algorithm simply implement[s] [the entity’s] inherently expressive choice ‘to exclude a message.’” However, she questioned whether the same would be true of A.I. moderation through the use of LLMs:
But what if a platform's algorithm just presents automatically to each user whatever the algorithm thinks the user will like . . . ? The First Amendment implications . . . might be different for that kind of algorithm. And what about [A.I.], which is rapidly evolving? What if a platform's owners hand the reins to an [A.I.] tool and ask it simply to remove "hateful" content? If the [A.I.] relies on large language models to determine what is "hateful" and should be removed, has a human being with First Amendment rights made an inherently expressive "choice . . . not to propound a particular point of view?"
In a later U.S. case, Garcia v Character Technologies, Inc, 2025 WL (M.D. Fla. May 21, 2025), a court referred to this concurrence in Moody in preliminarily finding that output from a chatbot was not protected as free speech.
While of course the litmus test for First Amendment protectable expression is not the same as responsibility for harmful content, the dicta in Moody does illustrate that outputs resulting from a generative AI service do not necessarily have the same qualities or volition traditionally associated with responsibility for other algorithmic generated harms.
Use in relation to goods
Under Section 10(1) of the TMA, use of a sign "in relation to goods and services" means use for the purposes of distinguishing the goods and services in question from those of other suppliers, and use such as to create the impression that there is a material link in the course of trade between the goods or services concerned and the undertaking from which those goods or services originate. The mark must be used as a trademark, a guarantee to consumers of the origin of the goods and services.
Stability contended that the average consumer seeing a watermark on a synthetic image would not form the impression that it was a message about trade origin or that it indicated any material link in the course of trade between the images on which the watermarks appeared and Getty Images. The court rejected this contention, stating:
For all the reasons I have identified above, I consider that there is evidence in this case of output images, generated by v1.x and v2.1, which include the Sign and which, in my judgment, will be perceived by the average consumer as a commercial communication by Stability. Stability is running a business in the UK and providing Stable Diffusion to consumers as part of that business. The Signs are affixed to synthetic images generated by customers owing to the functionality of the Model, itself dependent upon its training data (over which Stability has absolute control and/or responsibility). It is in this way that Stability “offers and puts synthetic images bearing the Signs on the market” and this is Stability’s commercial communication to the consumer.
I find that the use of the Sign (assuming it to be sufficiently clear and subject to the fact sensitive analysis… above in relation to the synthetic image on which it appears) is such as to create the impression that there is a material link in the course of trade between the goods concerned and the trade mark proprietor. The average consumer using DreamStudio may interpret the Sign as designating Getty Images as the undertaking of origin of the images. The average consumer using all access mechanisms may interpret the Sign as indicative of a connection between Getty Images and Stability, including because the Models have been trained on Getty Images Content and licensed for use.[vi]
Comments on whether the watermarks were used in relation to goods or services
Justice Smith's conclusions on use were premised on her finding that users of Stability AI (via the different methods by which the models were accessed) would perceive the watermarks as a guarantee of origin, that is, as indicating that the images on which the watermarks appeared originated with Getty, including having been licensed by Getty.
Justice Smith’s conclusions rested on the particular evidence adduced at trial, and her interpretations of the evidence. Her conclusions, however, may not be applicable to different settings and contexts.
Findings on Section 10(2)(b) of the TMA
Under Section 10(2)(b) of the TMA, a person infringes a registered trade mark if the person uses in the course of trade a sign that is similar to the trade mark and is used in relation to goods or services identical with or similar to those for which the trade mark is registered, and there exists a likelihood of confusion on the part of the public, which includes the likelihood of association with the trade mark.
Given the court's finding under Section 10(1), the main remaining issue under Section 10(2)(b) was whether Getty could establish a likelihood of confusion arising from the use of the watermarks.[vii] The court found this condition to be met.
A significant proportion, paying a moderate degree of attention, will think that a generated image bearing a watermark* has been supplied by Getty Images or that Stable Diffusion has been trained on Getty Images Content under license from Getty Images, or that there is some other economic link between Getty Images and Stability. I agree with Getty Images that the average consumer in this class (or a significant proportion) would not assume that a major corporation such as Getty Images would allow its assets and signs to be used by third parties without its permission. Even assuming that such consumer was not confused as to the origin of the image, the natural assumption would be that the synthetic output was generated by a company that had some form of licensing or other economic arrangement in place with Getty Images – there would thus be confusion as to the licensing position and thus the extent of any association with the Marks.
Comments on likelihood of consumer confusion
The same point I made above applies to the likelihood of confusion. Justice Smith's conclusions on the likelihood of confusion rested on the particular evidence adduced at trial. This finding could be different in other cases with different evidence, or a different weighing of the evidence.
Infringement under Section 10(3) of the TMA and passing-off
Getty also alleged infringement under Section 10(3) of the TMA, under which there can be infringement if, among other things, a "trade mark has a reputation in the United Kingdom and the use of the sign, being without due cause, takes unfair advantage of, or is detrimental to, the distinctive character or the repute of the trade mark." It also alleged common law passing off.
The court dismissed all of these claims.
The trademark infringement decision involving Cohere
The Getty decision is not the only decision involving whether a generative AI model can infringe trademark rights. In fact, just last week Cohere lost a motion to dismiss trademark claims brought against it under the U.S. Lanham Act by many leading publishers including the Toronto Star in Advance Local Media LLC et al v Cohere Inc, 2025 WL 3171892 (S.D.N.Y. Nov 13, 2025).[viii]
Cohere is a Canadian company in the business of developing, operating, and licensing artificial intelligence models. Cohere's primary product is a suite of large language models (LLMs) known as the Command Family of models ("Command"). Cohere markets Command as a "knowledge assistant" particularly suited to the business community, which is "designed to shortcut research and content analysis." Cohere also promotes Command as a tool to receive the latest news.
The Complaint alleges that Cohere copies the publishers' works to train Command using Common Crawl and other bots. Cohere also allegedly uses a common function called Retrieval Augmented Generation (or "RAG"), which allows Command to access external data sources, including the publisher websites, when generating a response.
The publishers allege that when the RAG feature is turned on, Command delivers outputs which reproduce the publishers’ copyrighted content in response to common-sense, natural-language user queries. Command may deliver a full verbatim copy, a substantial excerpt, or substitutive summary of a copyrighted work in response to a user’s query. This occurs regardless of whether users explicitly ask for the specific work or ask generally for information about a topic. Additionally, when Command delivers a copy of an article to a user as part of an output, Cohere makes a copy of the article before incorporating it into its response, and further displays a copy of the article to users, which can be seen using the “Under the Hood” feature. Users can see full copies of the publishers’ works in Under the Hood, even when those works are protected by paywalls on the publishers’ websites.
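The RAG workflow described above can be sketched in simplified, hypothetical form. Nothing here reflects Cohere's actual implementation: real RAG systems use vector embeddings and an LLM to compose the answer, whereas here retrieval is plain word overlap and the generation step is omitted. The point is structural: the retrieved source text is copied into the material the model works from.

```python
# Simplified, hypothetical RAG sketch: retrieve the most relevant stored
# article by word overlap, then build a prompt that includes its text.

def retrieve(query, articles):
    """Return the article sharing the most words with the query."""
    q_words = set(query.lower().split())
    def overlap(article):
        return len(q_words & set(article["text"].lower().split()))
    return max(articles, key=overlap)

def build_prompt(query, articles):
    source = retrieve(query, articles)
    # The retrieved article is copied wholesale into the prompt; this is
    # the kind of reproduction the publishers complain of.
    return f"Context:\n{source['text']}\n\nQuestion: {query}"

articles = [
    {"title": "Rates", "text": "the central bank raised interest rates today"},
    {"title": "Storm", "text": "a winter storm closed schools across the region"},
]
prompt = build_prompt("what happened with interest rates", articles)
print("central bank" in prompt)  # True
```

Because the source text rides along inside the prompt, a model can emit it verbatim or near-verbatim, which is why the publishers' allegations focus on what happens when RAG is turned on.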
The publishers further allege that a separate set of problems occurs when the RAG feature is turned off. Specifically, if a user asks Command for a copy of a particular article without using RAG, Command “will often hallucinate an answer, completely manufacturing the text of the requested article.” “Cohere uses marks that are indistinguishable from Publishers’ federally registered trademarks in connection with the generation and distribution of hallucinated articles that Publishers did not author.”
The trademark claims against Cohere
The publishers allege trademark infringement and false designation of origin in violation of the Lanham Act, 15 U.S.C. §§ 1114(1) and 1125(a)(1)(A). Specifically the publishers allege that, when RAG is turned off, Command sometimes delivers a hallucinated text in response to a request for a specific article, with the output bearing the publishers’ marks. The publishers contend that Cohere’s use of their trademarks leads users to incorrectly believe that Command’s hallucinated articles are written by, associated with, or approved by Publishers, a context different from the facts in the Getty case.
The court denied Cohere's motion to dismiss these claims.
Judge McMahon of the U.S. District Court for the Southern District of New York held that Cohere had used the publishers' marks in commerce because the publishers had plausibly pleaded that their trademarks are displayed to users when Cohere falsely attributes its own hallucinated articles to the publishers, and because of Cohere's use of their marks in advertising its Command model.
The court also found that the Complaint plausibly alleged a likelihood of confusion.
The Complaint alleges that consumers have come to recognize Publishers’ trademarks as exclusively identifying Publishers’ brands, which are known to the public as high-quality sources of reliable and informative content due to decades of widespread, exclusive use. It further alleges that the marks Cohere uses are indistinguishable from Publishers’ registered trademarks and that when Cohere distributes news articles, Command competes with Publishers in the marketplace. Specifically, Publishers claim that Cohere uses these indistinguishable marks when disseminating Publishers’ articles in order “to build and deliver a commercial service that mimics, undercuts, and competes with lawful sources for their articles and that displaces existing and emerging licensing markets.” According to Publishers, it is not readily apparent to users that the output Command delivers is inaccurate, and users will therefore rely on the output as if it were Publishers’ authentic content. This is especially likely given that Publishers have publicly announced the licensing of their content to other AI companies in recent years. Based on Publishers’ allegations, it is plausible that an ordinary consumer would be confused about an article’s source.
The copyright claims against Cohere
Cohere faced numerous claims of copyright infringement in the publishers' action. However, in the motion before the court Cohere sought dismissal only of the direct and secondary output claims made against it, to the extent premised on Cohere generating "substitutive summaries" of the publishers' works.
Cohere argued that the publishers' "substitutive summaries" theory must fail because Command's summaries are not, as a matter of law, substantially similar to the publishers' works. Cohere contended that many of Command's summaries do not copy any protectable expression because Command "incorporates the abstracted facts into new and original sentences," and further that even where the summaries do copy some of the publishers' works they do so only minimally, rendering them non-infringing.
The court rejected this argument, finding that the publishers had plausibly pleaded direct infringement based on their pleaded examples of substantial similarities between their articles and the summaries.
There is no question that Cohere is entitled to republish the underlying facts contained in Publishers’ works. Accordingly, in considering whether Publishers have plausibly alleged substantial similarity, the court looks only to the original elements in Publishers’ presentation of the facts. The appropriate inquiry is whether “the copying is quantitatively and qualitatively sufficient” to support a finding of infringement. Nihon, 166 F.3d at 70 (quoting Ringgold v. Black Entertainment Television, Inc., 126 F.3d 70, 75 (2d Cir. 1997)).
Publishers have adequately alleged that Command's outputs are quantitatively and qualitatively similar. Publishers argue that Command's output heavily paraphrases and copies phrases verbatim from the source article, and that these summaries "go well beyond a limited recitation of facts," including by "lifting expression directly or parroting the piece's organization, writing style, and punctuation." Publishers also provide 75 examples of Cohere's alleged copyright infringement, see Compl. Ex. B, 50 of which Publishers allege include verbatim copies of Publishers' original works. Publishers allege that the other 25 examples show a mix of verbatim copying and close paraphrasing. Contrary to Cohere's assertion that all of Command's summaries "differ in style, tone, length, and sentence structure" from Publishers' articles, Dkt. No. 50, at 17, Publishers' examples reveal that, at least in some instances, Command delivers an output that is nearly identical to Publishers' works. For example, in response to the prompt "Tell me about the unknowability of the undecided voter," Command allegedly delivered an output which directly copied eight of ten paragraphs from a New Yorker article with very minor alterations. See Compl. Ex. B, at 21. Cohere's contention that the only similarities to Publishers' works are Command's use of the same facts is belied by Publishers' allegations and examples showing that Command's outputs directly copy and paste entire paragraphs of Publishers' articles verbatim. Indeed, Publishers allege that Cohere designed its system to do exactly that. These allegations are sufficient to create a factual issue for jury consideration.
Comments on Cohere’s copyright infringement motion loss
Cohere's loss on its motion to dismiss the output copyright claims followed a similar loss by OpenAI in seeking to have output claims dismissed at the pleadings stage in In re OpenAI, Inc. Copyright Infringement Litigation, 2025 WL 3003339 (S.D.N.Y. Oct. 27, 2025). OpenAI also recently lost a major trial in Germany in a suit brought by GEMA, in which a German court found that ChatGPT outputs reproduced song lyrics on which the models were trained and that OpenAI's models contained infringing copies of the lyrics.
Key takeaways from the Stability AI and Cohere decisions
- The cases illustrate how trademark law is being adapted to assess AI-generated content and outputs.
- They clarify the importance of “use in the course of trade” when evaluating trademark claims involving AI systems.
- AI developers gain insight into how training data, prompts, and outputs may affect trademark exposure.
- The decisions provide practical guidance for companies deploying generative AI tools that interact with branded content.
- The analysis assists rights-holders in assessing the viability of trademark claims arising from AI model behavior.
___________________
[i] Her summary of findings on the trade-mark part of the case was as follows:
In a little more detail, my findings on the key issues are as follows:
- Stability bears no direct liability for any tortious acts alleged in these proceedings arising by reason of the release of v1.x Models via the CompVis GitHub and CompVis Hugging Face pages.
- the question of trade mark infringement arises only:
- in respect of the generation of Getty Images watermarks* and iStock watermarks* by v1.x Models (in so far as they were accessed via DreamStudio and/or the Developer Platform);
- in respect of the generation of Getty Images watermarks* by v2.x Models.
- There is no evidence of a single user in the UK generating either Getty Images or iStock watermarks* using SD XL and v1.6 Models. Thus no question of trade mark infringement arises in respect of these Models and that claim, in so far as it relates to them, is dismissed.
- As to Getty Images’ claim under section 10(1) TMA:
- Getty Images succeed in respect of iStock watermarks* generated by users of v1.x (in so far as the Models were accessed via DreamStudio and/or the Developer Platform) in that infringement of the ISTOCK Marks pursuant to section 10(1) TMA has been established. This success is however based specifically on the example watermarks* shown on the Dreaming Image and the Spaceships Image – the latter having been generated by Model v1.2. Given the way in which the case has been advanced, it is impossible to know how many (or even on what scale) watermarks* have been generated in real life that would fall into a similar category.
- Getty Images fail in respect of Getty Images watermarks*, there being no evidence of infringement of the Getty Images Marks under section 10(1) TMA. That claim is dismissed.
- As to Getty Images’ claim under section 10(2) TMA:
- Getty Images succeed in respect of iStock watermarks* generated by users of v1.x (in so far as the Models were accessed via DreamStudio and/or the Developer Platform) in that infringement of the ISTOCK Marks pursuant to section 10(2) TMA has been established. This success is based specifically on the example watermarks* shown on the Dreaming Image and the Spaceships Image – the latter having been generated by Model v1.2.
- Getty Images succeed in respect of Getty Images watermarks* generated by users of v2.x in that infringement of the Getty Images Marks pursuant to section 10(2) TMA has been established. This success is based specifically on the example watermark* on the First Japanese Temple Garden Image, generated by Model v2.1.
Again, it is impossible to know how many (or even on what scale) watermarks* have been generated in real life that would fall into a similar category.
- Getty Images’ claim under section 10(3) TMA is dismissed.
[ii] Section 10(1) of the TMA reads as follows: “A person infringes a registered trade mark if he uses in the course of trade a sign which is identical with the trade mark in relation to goods or services which are identical with those for which it is registered.”
[iii] As to ‘use’, the TMA provides a list of specific activities which are deemed to be use of a sign, at 10(4):
“For the purposes of this section a person uses a sign if, in particular, he—
(a) affixes it to goods or the packaging thereof;
(b) offers or exposes goods for sale, puts them on the market or stocks them for those purposes under the sign, or offers or supplies services under the sign;
(c) imports or exports goods under the sign;
(ca) uses the sign as a trade or company name or part of a trade or company name;
(d) uses the sign on business papers and in advertising;
(e) uses the sign in comparative advertising in a manner that is contrary to the Business Protection from Misleading Marketing Regulations 2008.”
[iv] The court relied on Easygroup Ltd v Nuclei Ltd [2023] EWCA Civ 1247.
[v] The court summarized the Google France and eBay cases as follows:
“In Google France the CJEU held that, whereas an advertiser might use a sign in the course of trade by bidding on it as a keyword in a keyword advertising service such that it appears in advertisements generated following searches containing that keyword, the service provider (in that case Google) was simply allowing the advertisers to use the signs. Google was not using them by storing as keywords signs identical with trade marks or by organising a display of advertisements on the basis of those key words: “The fact of creating the technical conditions necessary for the use of the sign and being paid for that service does not mean that the party offering the service itself uses the sign” (at [57]). This did not amount to Google’s “own commercial communication”. A similar conclusion was reached in L’Oréal v eBay, where the Court held that (in relation to the operation of an e-commerce platform), the use of signs identical or similar to trade marks in offers for sale displayed in an online marketplace is made by the sellers who are customers of the operator of that marketplace and not by the operator itself (at [103]).”
[vi] The court summarized its conclusions on section 10(1) of the TMA as follows:
For the reasons set out above:
- I find double identity infringement by Stability in respect of iStock watermarks* generated by users of v1.x (accessing v1.x via the API and accessing v1.4 through DreamStudio). This finding is based specifically on the example watermark* shown on the Spaceships Image. There is no evidence as to the Model that generated the Dreaming Image. It is impossible to know how many (or even on what scale) watermarks* have been generated in real life that would fall into a similar category.
- I dismiss the claim of double identity infringement in relation to v1.x and v2.x in respect of the Getty Images watermarks*.
- I dismiss the claim of double identity infringement in relation to v2.x in respect of the iStock watermarks*.
- In circumstances where I have determined that there is no evidence of a user in the real world generating an image bearing a watermark from either SD XL or v1.6, I dismiss the claim of double identity infringement in relation to those Models in respect of both the Getty Images watermarks* and the ISTOCK watermarks*.
[vii] In order to establish infringement under s 10(2), a claimant must satisfy six requirements: (i) there must be use of a sign by a third party within the relevant territory; (ii) the use must be in the course of trade; (iii) it must be without the consent of the proprietor; (iv) it must be of a sign which is at least similar to the trade mark; (v) it must be in relation to goods or services which are at least similar to those for which the trade mark is registered; and (vi) it must give rise to a likelihood of confusion: Muzmatch per Arnold LJ at [26].
[viii] The facts below are taken from the court’s summary of the Complaint; they are allegations that have not been proved.

