Takeaways from key cases of Generative AI lawsuits
- Cherish Carvalho

Generative AI has taken the world by storm and has permeated nearly every sector and industry. Most companies see AI as an opportunity they must pursue: it is understood to bring efficiency, remove repetitive tasks, realign strategies, and significantly increase revenue. AI has the potential to bring a lot of positive change, but it is important to ensure that it is deployed and used responsibly. Responsible use is crucial for AI's sustained adoption and, most importantly, it keeps organizations off the radar of regulatory bodies. In this article, we look closely at some key lawsuits and enforcement actions surrounding Generative AI to understand the factors that shaped these rulings, fines, and penalties. These cases can serve as a reference for companies, data privacy and information governance professionals, and anyone planning to incorporate, or currently incorporating, Gen AI into their business processes.
OpenAI | Garante | December 2024
The Italian Data Protection Authority (Garante) imposed a €15 million fine on OpenAI, in the first ever Generative AI-related enforcement case under the GDPR, for using personal information without a proper legal basis.
Summary: In an investigation initiated in March 2023, the Garante found that OpenAI processed users’ personal information (PI) to train ChatGPT without first identifying an appropriate legal basis for doing so. The DPA also alleged that OpenAI failed to provide age verification mechanisms, risking children’s exposure to responses inappropriate for their age. Finally, the Garante found that OpenAI did not notify it of a data breach the company suffered in March 2023.
Outcome: The Authority ordered OpenAI to carry out a six-month communication campaign via radio, television, newspapers, and the internet to ensure effective transparency regarding the collection and processing of personal data for the training of generative AI. The communication must also inform data subjects of their rights, including the rights to object to, rectify, and delete their data.
Key Takeaways: Companies using personal data to train generative AI models must identify a valid legal basis before doing so. They should also maintain clear, effective communication so that users understand how AI systems are trained, how their data is used, and how they can exercise their rights.
Link to full resource: COMUNICATO STAMPA - ChatGPT, il Garante privacy chiude l’istruttoria.... - Garante Privacy
Microsoft, GitHub and OpenAI | U.S. District Court | July 2024
Programmer and lawyer Matthew Butterick filed a lawsuit against Microsoft, its subsidiary GitHub, and its business partner OpenAI in the U.S. District Court for the Northern District of California, for allegedly violating copyright laws.
Summary: In November 2022, Matthew Butterick filed a lawsuit against Microsoft, GitHub, and OpenAI over their AI-powered coding assistant GitHub Copilot, which generates code and helps developers write code faster. The tool is trained on public code repositories from the internet. Many of these repositories are licensed, meaning that if the code is reused, the creator must be credited. The lawsuit alleged that Copilot outputs licensed code without any attribution, violating copyright law.
Outcome: A judge in the District Court, however, dismissed 20 of the 22 original claims against the three companies in a court order unsealed in July 2024, determining that the allegedly copied code was not similar enough to the original work. Two claims still stand: one accusing the companies of open-source license violations and the other alleging breach of contract.
Key Takeaways: When training on licensed material from the internet, generative AI models must properly attribute the original creators in their outputs to avoid license violations.
Link to resource: The developers suing over GitHub Copilot got dealt a major blow in court | The Verge
It should also be noted that OpenAI is facing a number of copyright infringement lawsuits from various parties, including the New York Times, the Chicago Tribune, and the Authors Guild, each claiming that the company used their content as training data for its AI models. The NY Times, for example, alleges that OpenAI used millions of The Times’s copyrighted news articles, in-depth investigations, opinion pieces, reviews, and how-to guides to train its models.
Stability AI | U.S. District Court, U.K. High Court | February 2023
Stock image provider Getty Images filed a lawsuit against AI company Stability AI Inc., known for its text-to-image model Stable Diffusion, for allegedly using Getty’s photos to train the model.
Summary: Getty Images filed a lawsuit in the District Court for the District of Delaware against U.K.-based AI company Stability AI for allegedly training its generative AI model, Stable Diffusion, by copying millions of Getty’s photos without a license. Getty also claims that Stability infringed its trademark by producing images bearing Getty’s trademark, confusing users.
Outcome: In the High Court in London, the first trial, to determine liability, is listed to commence on 9 June 2025. In her reserved judgment ahead of the trial, however, the judge refused to strike out the training and development claims and the secondary infringement claim.