By Joshua Rich --
In the lawsuit brought against them for using visual artists' work to train their text-to-image diffusion model, and producing near-identical copies in response to prompts, Stability AI, Midjourney, DeviantArt, and Runway AI moved to dismiss almost all of the claims asserted against them. Those claims include copyright infringement, violations of the Digital Millennium Copyright Act ("DMCA"), unjust enrichment, violation of the Lanham Act for false endorsement and trade dress infringement, and, against DeviantArt alone, breach of contract. Judge Orrick of the U.S. District Court for the Northern District of California, although seemingly skeptical of the merits of some of the other claims, dismissed only the DMCA and breach of contract claims with prejudice and the unjust enrichment claims without prejudice and with leave to amend. Thus, the sprawling, heavily litigated case will go forward based on many different theories of recovery (including quite a few novel ones).
The case began when visual artists Sarah Andersen, Kelly McKernan, and Karla Ortiz[1] sued Stability AI, Midjourney, and DeviantArt on various claims arising out of the creation and use of the Stable Diffusion model. Allegedly, Stability AI developed Stable Diffusion and trained it through the use of LAION ("Large-Scale Artificial Intelligence Open Network") datasets that include the plaintiffs' artwork. Each of the defendants then allegedly employed Stable Diffusion as the engine for its proprietary artwork-producing AI product, including to create images "in the style" of the plaintiffs' art (some of which are nearly identical copies of existing works). The plaintiffs asserted claims against all three defendants for copyright infringement (both direct and inducement), violations of the DMCA, violation of the right of publicity, unfair competition, and declaratory relief, as well as breach of contract claims against DeviantArt.
All three of the initial defendants separately filed motions to dismiss the original Complaint, but raised similar arguments. As it turned out, Kelly McKernan and Karla Ortiz had not registered any of their copyrights, so they had to concede that they could not assert such claims. The court also narrowed Sarah Andersen's copyright claims to those works she had registered, but allowed those claims to go forward. However, the court ruled that the factual pleadings for inducement of copyright infringement were too conclusory to support the claim, and so dismissed those claims with leave to file an amended complaint with more factual basis. The same was true for the claims based on the DMCA, right of publicity, unfair competition, and breach of contract; Judge Orrick explained at length how the original Complaint had too little factual explanation for a defendant to identify whether a plausible cause of action was stated in any of the claims, but allowed the plaintiffs to replead their allegations with more factual support so as to identify such claims.
Rather than merely adding more factual averments in support of the claims they already pled, the plaintiffs fundamentally changed the pleadings. They added seven more named plaintiffs and a new defendant, Runway AI, which they accuse of developing Stable Diffusion with Stability AI. They also changed the asserted causes of action, dropping some (right of publicity and declaratory relief) and adding others (violation of the Lanham Act). The ten named plaintiffs then averred additional facts in support of the claims, albeit without addressing all of the deficiencies Judge Orrick had identified.
Once again, each of the defendants moved to dismiss the claims asserted against them.[2] While most of the arguments overlapped, there were some differences based on the claims asserted against each defendant and their factual circumstances. Ultimately, based on the defendants' motions to dismiss, Judge Orrick dismissed the DMCA and breach of contract claims with prejudice and the unjust enrichment claims without prejudice and with leave to amend.
The first argument raised by Stability AI and DeviantArt was that adding more plaintiffs and new claims was not what Judge Orrick gave leave to do. Judge Orrick acknowledged that to be true and, while leave to amend is freely given, noted that the proper course of action would have been to request leave to add the new plaintiffs and claims. But after rapping the plaintiffs' knuckles for not doing so, he concluded that he would have granted leave for those changes had he been asked, and so allowed the new plaintiffs and claims to proceed.
Substantively, Midjourney raised an argument that bridged all of the copyright claims -- that certain plaintiffs' copyright registrations were insufficient to support the claims of infringement. Namely, two plaintiffs (Sarah Andersen and Julia Kaye) registered some of their works as part of compilations, and plaintiff Gerald Brom registered some of his works as text rather than artwork. But every plaintiff had at least one visual work (that is, a work of visual art) registered and asserted. Midjourney also argued that the plaintiffs had not identified all of the copyright registrations being asserted. But the federal pleading system requires only notice of plausible claims, not identification of all supporting facts. Therefore, Midjourney's arguments were not a basis for dismissing any claim (although any copyright not properly registered cannot be the basis for a claim of infringement).
Only one party, DeviantArt, argued that the First Amended Complaint completely failed to make out a claim for direct copyright infringement against it. It did not train the Stable Diffusion model, and the plaintiffs had failed to identify facts sufficient to tie it to any infringement in the original Complaint. However, making that argument required the court to rely on a review of academic articles cited in, but not incorporated into, the First Amended Complaint to determine the plausibility of the plaintiffs' allegations. That is simply asking too much: demanding that the court take a deep dive into the technology, far beyond the face of the First Amended Complaint, to determine the plausibility of the allegations. Similarly, DeviantArt asked the court to find that use of the plaintiffs' works in Stable Diffusion was fair use. Those fact-based arguments are suited for a motion for summary judgment, not a motion to dismiss.
Although it did not contest that the plaintiffs had stated a claim for direct infringement through training the Stable Diffusion tool, Runway AI argued that certain theories of direct infringement related to Stable Diffusion 1.5 (namely, that the model itself, after training, was an infringing copy of plaintiffs' works, or that distributing Stable Diffusion 1.5 violated the plaintiffs' distribution rights) failed to state a claim. Since there was no dispute that at least one theory of direct infringement (that training Stable Diffusion on the plaintiffs' works infringed their copyrights) stated a claim, the court had no need to address the other theories. However, it noted that the viability of those theories turns on what facts can be proven, which is improper to resolve on a motion to dismiss.
All of the defendants argued that the claims for inducing copyright infringement were deficient. Two theories of induced infringement were advanced. First, as alleged against Stability AI and Runway AI, the Stable Diffusion models themselves were infringing works, and their distribution (such as to Midjourney and DeviantArt) constituted infringement. The defendants argued that this was just a direct infringement claim repackaged under a different theory. Judge Orrick found, however, that the plaintiffs were entitled to plead the two forms of infringement in the alternative and to take discovery into how Stable Diffusion works and is implemented by users. Any potential overlap can be resolved later, after discovery.
Second, the defendants argued that the plaintiffs failed to aver facts that would support a claim that the defendants are encouraging others to use Stable Diffusion to create infringing outputs. The plaintiffs had identified a statement by Stability AI's CEO indicating that Stable Diffusion could "recreate" any of the images on which it had been trained,[3] as well as articles by academics and others identifying the fact that training images could sometimes be reproduced as outputs. Those facts took the case out of the VCR paradigm of Sony Corp. of America v. Universal City Studios, Inc., 464 U.S. 417 (1984), under which the marketing of a product that could be easily used to infringe copyrights -- but that was also capable of substantial noninfringing use -- would not create a presumed intent to cause infringement. That is, here, there was actual evidence that the marketing of Stable Diffusion was being done with the knowledge that it would likely facilitate infringement by others. Thus, Judge Orrick found that the plaintiffs had averred facts sufficient to go forward with a claim for inducement of copyright infringement.
Under the DMCA, the plaintiffs brought two different claims. Under § 1202(a), they asserted that the defendants falsely claimed ownership of the copyrights in the plaintiffs' copyrighted works by claiming copyright in the Stable Diffusion model itself. Under § 1202(b)(1), they asserted that the defendants intentionally removed or altered copyright management information ("CMI"). The defendants countered by arguing that a claim of copyright in the Stable Diffusion model was not a claim of copyright "in connection with" any works produced by the model, and that there was no allegation that they knowingly provided false CMI with the intent to induce copyright infringement. Judge Orrick found neither claim plausible. He found a viewer would not read the copyright license governing the Stable Diffusion model to necessarily apply to works produced by the model. And, consistent with earlier precedent in the same district,[4] he found none of the defendants "removed" or "altered" any CMI; they just did not affix CMI to AI-generated works nearly identical to existing works. Because the newly generated works are not truly identical to the works used to train the model, he found § 1202(b)(1) did not require ensuring the CMI was on the works. Thus, Judge Orrick dismissed all of the DMCA claims with prejudice.
The defendants next moved to dismiss the plaintiffs' unjust enrichment claims because, they argued, the claims are preempted by the Copyright Act. To survive preemption, a state law cause of action (like unjust enrichment) must have some additional element that makes the protected rights qualitatively different from copyright rights. As the claims were pled, there was little dispute that the unjust enrichment claims did not include any extra elements beyond copyright infringement. Instead, the plaintiffs argued in briefing that the defendants were profiting off of the plaintiffs' reputations by mimicking their works based on prompts using the artists' names. While that theory might not be preempted, because it did not appear in the First Amended Complaint the parties did not have a fair opportunity to address it. Accordingly, Judge Orrick dismissed the unjust enrichment claims with leave to replead in a Second Amended Complaint, if the plaintiffs chose to do so.
Five of the plaintiffs asserted a Lanham Act claim against Midjourney, alleging that it falsely claimed the artists had endorsed its product when Midjourney's CEO included them on a list, posted on the Discord platform, of artists whose styles its tool could mimic, and when the company itself included user-created works incorporating the artists' names in its showcase. Midjourney argued that the plaintiffs failed to show falsity, relying on portions of a Discord thread not relied upon by the plaintiffs and requesting judicial notice of that evidence. That type of disputable evidence is exactly what judicial notice is not intended to permit. But more fundamentally, the fact that there is disputed evidence shows that dismissal is not appropriate; that argument is better presented at the summary judgment stage. Midjourney made two other arguments that suffer the same deficiencies: that invocation of the plaintiffs' names to identify their styles has no artistic relevance to the underlying works, and that it cannot be liable for vicarious trade dress infringement because the plaintiffs have not identified all of the hallmarks of their works. Again, these are factual arguments not resolvable as a matter of law, the standard at the motion to dismiss phase.
DeviantArt also moved to dismiss the breach of contract claim asserted against it. Essentially, the plaintiffs argued that the contractual provision between them and DeviantArt, indicating that DeviantArt did not claim copyright in any of the artists' works, was breached when DeviantArt incorporated Stable Diffusion into its own AI tool. Just as he had done before, however, Judge Orrick rejected the argument that DeviantArt had breached that provision, even if third parties (whether Stability AI and Runway AI in training Stable Diffusion, or end users in using DeviantArt's tool) were potentially infringing; DeviantArt itself had not exceeded the scope of its limited license by virtue of their conduct. The plaintiffs further argued that DeviantArt had breached an implied covenant of good faith and fair dealing, but could not tie it to a specific contractual provision that was frustrated by DeviantArt's conduct. The breach of contract claims were therefore dismissed with prejudice.
The motions to dismiss were notable for being supported by requests from Runway AI and Midjourney (and DeviantArt, albeit informally) for the court to take judicial notice of briefing from other cases and academic articles mentioned in the First Amended Complaint. Judicial notice of pleadings is proper only to prove they exist and were filed, not to incorporate the arguments made in them. But that is exactly what the defendants were trying to do -- supplement their arguments with those made in the briefing and articles. Judge Orrick therefore refused to take judicial notice of the requested documents.
All in all, the proceedings on the motions to dismiss reveal quite a bit about the case. First, current statutory law and precedent are poorly fitted for the resolution of disputes over ownership of AI-generated non-textual creative works. The plaintiffs here have struggled to identify the appropriate claims to assert, and the defendants have struggled to find good defenses to undercut them. Second, both plaintiffs and defendants will be throwing every argument they can at the other side, regardless of the strength of the argument, hoping the court will leverage it in their favor. Finally, Judge Orrick has made it clear that the critical inflection point in the case will be summary judgment, at which point it should be clear to everyone how he will rule on the merits.
[1] The lawsuit is brought as a putative class action, but there were only three named plaintiffs in the original Complaint. There are six different classes identified in the First Amended Complaint based on the relief sought (injunctive relief or damages) and which dataset class members' works are found in. Given that class certification requires questions of law or fact common to the class(es) and representative parties with claims typical of the class(es), the number of different subclasses is a bad omen for class certification. See Fed. R. Civ. P. 23(a).
[2] Other than DeviantArt, none of the defendants moved to dismiss the direct copyright infringement claims.
[3] Runway AI tried to argue that Stability AI's CEO's remarks pointed a finger only at Stability AI, but since the two worked together on Stable Diffusion and there was other evidence of intent, Judge Orrick rejected that argument.
[4] Doe 1 v. GitHub, Inc., No. 22-CV-06823-JST, 2024 WL 235217 (N.D. Cal. Jan. 22, 2024) (Tigar, J.). Judge Orrick noted that other districts have found that operators of large language models have an obligation to maintain the original creators' CMI on nearly identical works.