Leading publishers sue Meta for alleged copyright infringement, claiming their works were used without permission to train Llama AI models.
A consortium of prominent publishing houses has initiated legal action against Meta Platforms, alleging widespread copyright infringement in the development of its artificial intelligence models. The lawsuit, filed in Manhattan federal court, centers on claims that Meta unlawfully used millions of copyrighted works, ranging from academic textbooks to popular novels, to train its Llama family of large language models.
The plaintiffs, including industry giants Hachette, Macmillan, Elsevier, Cengage, and McGraw Hill, alongside acclaimed author Scott Turow, assert that Meta's AI training practices constitute unauthorized use of their intellectual property. The proposed class-action complaint alleges that Meta 'pirated' these extensive collections of literary and educational content without obtaining the necessary permissions or licenses. This unlawfully obtained material, the plaintiffs argue, was then fed into Meta's AI systems, enabling the Llama models to generate responses to human prompts while benefiting from content that was never legally acquired.
This legal challenge underscores a growing tension between content creators and technology developers in the rapidly evolving AI landscape. Publishers and authors are increasingly concerned about the appropriation of their copyrighted material for AI training purposes without fair compensation or acknowledgment. The outcome of this case could set a significant precedent for how AI developers acquire and utilize data, potentially reshaping the future of intellectual property rights in the age of artificial intelligence.
The lawsuit seeks to hold Meta accountable for its alleged actions, demanding damages for the unauthorized use of the plaintiffs' intellectual property. As the legal battle unfolds, it will draw significant attention from both the publishing and technology sectors, raising critical questions about fair use, copyright protection, and the ethical development of artificial intelligence.


Apple has agreed to pay $250 million to settle a class-action lawsuit alleging misleading claims about Siri's advanced AI capabilities. The suit, covering roughly 36 million eligible iPhone users, contended that Apple promoted a 'personalized' version of Siri as 'available now' in late 2024 when it was not; plaintiffs argued these features remain unreleased and that the exaggerations were used to boost iPhone sales. While Apple made no admission of wrongdoing, the settlement highlights increasing scrutiny over AI marketing and the need for transparency in tech advertising. The case may influence how companies communicate the readiness of their AI-powered products, balancing future potential against current functionality.

A new study by AI search analytics firm Peec AI finds that artificial intelligence platforms reference Nigel Farage more often than any other UK political leader when prompted about British politics. Experts suggest this reflects Reform UK's effective strategy for achieving high "LLM visibility" within large language models. Farage's disproportionate digital prominence raises important questions about how AI shapes political narratives and public perception, and highlights the need to understand algorithmic biases and the mechanisms of digital influence in an AI-driven information landscape. The findings underscore the growing impact of AI on political discourse.