Navigating Copyright in the Age of AI: Insights from Adobe's Firefly Controversy
By Shradhanjali Sarma
Introduction
The emergence of Generative AI has sparked a new wave of excitement, akin to witnessing magic. Its capability to produce astonishing images and visuals is truly remarkable. With just a few text or image prompts, the technology can conjure a diverse array of pictures, from pencil drawings to watercolour masterpieces. Social media platforms are inundated with AI-generated images so nuanced and detailed that they often resemble genuine photographs. Recently, an art exhibition in NYC was devoted solely to AI-generated art, showcasing a collection of 140 pieces by 100 artists with the objective of igniting conversations about the future of art created through Generative AI and its impact on traditional art.
As the art community enthusiastically embraces the use of AI in generating artwork, it is imperative to address the legal implications of such images. Adobe's Firefly recently faced scrutiny over the copyright status of the images used to train its image-generation models. In response, Adobe asserted that there are no copyright concerns and committed to providing legal indemnification to businesses in the event of copyright-infringement lawsuits arising from images created with Firefly.
Adobe's Firefly, an AI-powered image-generation application tailored for enterprise clients, has stirred controversy over the origin of its training images. While Adobe claims the images are sourced from its own stock library and publicly available sources, this assertion has been met with skepticism. Many argue that a significant portion of the training images came from Midjourney, another AI-powered image generator. Furthermore, reports have surfaced indicating that some images were scraped from the web, raising concerns about copyright infringement.
The Adobe case underscores a prevalent issue in Generative AI image tools: the use of copyrighted images. Users of such applications often have no insight into whether the company held the necessary rights to the images used for training. Consequently, users may unwittingly generate images that expose them to copyright claims and leave them liable. This precise dilemma fueled the debate surrounding the Adobe issue. Adobe, however, maintains that there are no copyright issues and has extended indemnity to all clients, assuring them the tool is safe for commercial use.
The IP Problem in AI
Legal frameworks across various jurisdictions are actively grappling with the complexities of the Generative AI ecosystem and its associated legal ramifications. Numerous unresolved issues underscore the ongoing discourse surrounding Generative AI and intellectual property rights. These include questions regarding ownership of images generated by AI-powered tools, whether creators obtain permission before training AI models on images, and the licensing status of the images used in such processes. For example, a user of Midjourney has no way of knowing whether the images used to train the model were copyrighted, or whether the creator of the tool obtained permission from the original artists. The generator produces an image based on the images it was trained on, so the output may contain components of several original works. In such a case, who owns the generated image: the user or the company that created the tool? Can the original owners of the training images bring a claim against the user? And who will indemnify the user in such a case?
Several lawsuits have been filed over IP infringement by Generative AI. In Zhang v. Google LLC, a group of visual artists filed a class-action complaint against Google and Alphabet concerning the text-to-image diffusion models Imagen and Imagen 2 and multimodal models like Gemini. The complaint alleges that Google used the LAION dataset, an open image dataset, leading to claims of direct copyright infringement against Google and vicarious copyright infringement against Alphabet. In another case, Andersen v. Stability AI, Sarah Andersen initiated legal proceedings against Midjourney, Stability AI, and DeviantArt for the unauthorized use of her and other artists' images in training their AI models. Andersen and the other artists argued that when these companies' AI tools generate "new images" based solely on the training images, they are effectively creating infringing derivative works.
These cases prompt legal experts to consider whether image generation by AI-powered tools should be exempted from copyright infringement under the fair-use doctrine. The fair-use doctrine permits the use of copyrighted material without the owner's permission for purposes such as criticism, satire, commentary, news reporting, teaching, scholarship, or research, as well as for transformative uses of the material. The central question is whether images generated by tools like Midjourney, Stability AI, and Adobe Firefly qualify as transformative works under that doctrine. An instructive precedent is Authors Guild v. Google, in which Google scanned millions of books submitted by libraries, made them searchable online, and displayed "snippets" of text containing searched terms. Authors claimed this violated their copyrights and sought damages, injunctions, and declarations of infringement. Google argued that these activities were transformative and therefore protected by the fair-use doctrine, and the courts ultimately agreed.
Conclusion
In the AI realm, technological advancements have outpaced legal developments, leaving gaps in addressing critical legal questions. As the ecosystem expands, a legal void will persist until court judgments establish binding precedent. During this interim period, businesses must ensure that their model training complies with the law. They should also maintain visibility into how their AI models are trained, both to mitigate risk and to streamline their processes.