The decision could reshape how AI companies operate across Europe.
The announcement:
The Munich Regional Court ruled Tuesday that OpenAI violated copyright law by using song lyrics to train ChatGPT without licenses and by reproducing those lyrics when users prompt the chatbot.
Presiding Judge Elke Schwager ordered OpenAI to pay damages to the artists whose work was used without authorization.
The case was filed in November 2024 by GEMA, Germany's music rights society representing over 100,000 composers, songwriters, and publishers, on behalf of artists behind nine German songs, including work by best-selling musician Herbert Groenemeyer.
The court's ruling was explicit: "Both the memorisation in the language models and the reproduction of the song lyrics in the chatbot's outputs constitute infringements of copyright law."
What the ruling establishes:
→ Training is infringement: Simply storing copyrighted content in AI models—even if never directly output—violates copyright law
→ Output is also infringement: When ChatGPT reproduces song lyrics in responses to user prompts, that constitutes separate copyright violation
→ Compensation is required: Artists are entitled to damages both for the memorization during training and for reproduction in outputs
→ Licensing framework needed: AI developers must purchase licenses and pay creators before using their work for training or output
→ Precedent potential: As the first copyright decision against OpenAI, this could influence how generative AI is regulated across Europe
OpenAI's response:
"We disagree with the ruling and are considering next steps," OpenAI said in a statement. "The decision is for a limited set of lyrics and does not impact the millions of people, businesses and developers in Germany that use our technology every day."
The company added that it "respects the rights of creators and content owners" and is having "productive conversations with many organisations around the world" about licensing.
Why this matters:
🎯 Two-part infringement changes everything: Previous debates focused on whether training on copyrighted content constitutes infringement. This ruling says yes—and adds that outputs are a separate violation. AI companies face dual liability.
💰 The licensing business model becomes mandatory: If this precedent spreads, AI companies can't operate without licensing agreements for training data. That transforms content owners from victims to vendors with negotiating power.
⚡ European regulation takes concrete shape: While U.S. copyright cases against OpenAI remain pending, Europe is moving faster with enforceable rulings. This creates regulatory fragmentation that AI companies must navigate.
🌍 GEMA represents 100,000+ creators: This isn't one artist suing; it's a collective rights organization with massive membership. Similar organizations exist across Europe, and they're paying attention to this precedent.
What this means for businesses:
🚀 Content licensing becomes AI infrastructure: If you create AI tools, assume you'll need to license training data—not as a nice-to-have, but as a legal requirement. Budget for it like you budget for cloud computing.
💼 Your copyrighted content has new value: If you own substantial content libraries (articles, images, music, video), AI companies may need to license them for training. This creates new revenue streams from existing assets.
📊 Documentation of data sources matters: Companies using AI need to understand what data their models were trained on. If your vendor can't document proper licensing, you may inherit their legal liability.
⚖️ Geographic differences create compliance complexity: A model that's legal to train and operate in the U.S. might violate copyright in Europe. Global businesses face fragmented regulatory requirements.
🛡️ "Fair use" arguments are failing: OpenAI's defense that GEMA "misunderstood how ChatGPT works" didn't persuade the court. Technical explanations of AI training aren't swaying judges on copyright questions.
The bottom line:
This ruling matters because it establishes that AI copyright infringement happens at two distinct points: during training (memorization) and during operation (output). AI companies can't argue "we only trained on it, we don't reproduce it" or vice versa; both stages constitute violations requiring licenses and compensation.
The decision directly contradicts the approach most AI companies have taken: train on everything available, argue it's transformative use, fight in court if challenged. That strategy just failed in a major European market.
GEMA is seeking to establish a licensing framework requiring AI developers to pay for musical works in both training and output. If this model spreads beyond Germany (and similar collective rights organizations exist throughout Europe), AI companies face a fundamental business model shift.
The German Journalists' Association called this "a milestone victory for copyright law," suggesting media organizations are watching closely and considering similar actions.
For businesses, the immediate question isn't whether this ruling is right or wrong; it's whether your AI tools and vendors are operating with proper licenses in the jurisdictions where you do business. The compliance landscape just got significantly more complex.
OpenAI faces multiple copyright lawsuits in the United States from media groups, authors, and other creators. Those cases remain pending. But this German ruling provides the first concrete precedent of a court siding with creators over an AI company on both training and output.
Whether OpenAI appeals or begins negotiating licenses with European rights organizations will signal how seriously the company takes this shift. If they appeal and lose again, the precedent strengthens. If they license, they validate the ruling and open the door for similar demands from other content owners.
Your take: Should AI companies need to license every piece of content used in training, or does that kill innovation before it scales? 🤔