
Google’s AI Overviews Sued, Chatbots on a Leash, Creators at the Table

Penske Sues Google’s AI, California Targets Chatbots, UK ‘Grips’ Copyright

Read time: under 4 minutes

Welcome to this week's edition of The Legal Wire!

California is on the cusp of a first-of-its-kind crackdown on AI companion chatbots: SB 243 would force platforms like OpenAI, Character.AI, and Replika to block self-harm and sexual chats with minors, add clear “you’re talking to AI” reminders, and publish transparency reports. If Governor Newsom signs, it takes effect in 2026 and sets a template other states will borrow.

Across the Atlantic, UK Culture Secretary Lisa Nandy vowed to finally “grip” the AI-copyright standoff. After a messy spring that let models train on creative works without clear consent or compensation, she promised a fix that protects artists without choking innovation.

In Washington, Senator Ted Cruz pitched an AI “sandbox” that would grant two-year waivers from some federal rules for firms that can show safety and risk controls, an attempt to speed U.S. competitiveness while regulators figure out the details.

And a new front just opened with publishers: Penske Media (Rolling Stone, Billboard, Variety) sued Google, alleging that AI Overviews republish journalism without consent, tie search visibility to use of publishers’ content, and have contributed to a one-third drop in affiliate revenue. Google says summaries improve discovery, but with ~90% U.S. search share, the fight over who gets paid for the answers is now headed to court.

Meanwhile, inside the legal department, the question isn’t only “can we use AI?” but “how do we price it?” Dorna Moini’s new piece, “Share the Dividend,” offers practical models for AI-enabled matters: what to disclose, how to set guardrails, and fee structures that reward efficiency.

This week’s Highlights:

  • Industry News and Updates

  • Share the Dividend: Pricing Models for AI‑Enabled Legal Work

  • AI Regulation Updates

  • AI Tools to Supercharge Your Productivity

  • Legal prompt of the week

  • Latest AI Incidents & Legal Tech Map

Headlines from The Legal Industry You Shouldn't Miss

➡️ Rolling Stone Owner Penske Sues Google Over AI Overviews | Penske Media, owner of Rolling Stone, Billboard and Variety, has sued Google, claiming its AI Overviews republish journalism without consent and siphon traffic from publishers. Filed in federal court in D.C., the case marks the first major U.S. publisher lawsuit against Google’s AI summaries. Penske alleges Google ties search visibility to use of its content, contributing to a one-third drop in affiliate revenue. Google, which controls nearly 90% of U.S. search, says AI Overviews improve user experience and broaden site discovery. The suit follows similar complaints from publishers like Chegg and growing scrutiny of Google’s market dominance.
Sep 13, 2025, Source: Reuters

➡️ California Moves to Regulate AI Chatbots | California’s Assembly has passed SB 243, a bill to regulate AI companion chatbots and protect minors, sending it to the Senate for a final vote. If signed by Governor Gavin Newsom, the law would take effect in 2026, requiring platforms like OpenAI, Character.AI, and Replika to block chats on self-harm or sexual content, issue reminders that users are speaking with AI, and file transparency reports. The measure follows rising concerns after a teen suicide linked to ChatGPT and reports of Meta bots engaging in inappropriate conversations.
Sep 11, 2025, Source: TechCrunch

➡️ UK Culture Secretary Vows Action on AI and Copyright | Culture Secretary Lisa Nandy pledged that the government will not delay addressing the clash between AI and copyright, admitting ministers mishandled the issue. Speaking before the Commons Culture, Media and Sport Committee, she said legislation had allowed the debate to become “binary,” pitting AI against creative industries. The Data (Use and Access) Act, passed earlier this year, was amended after peers raised concerns about AI models training on copyrighted works without permission or payment. Nandy promised a solution that protects creatives while supporting innovation, saying: “We are going to grip this issue and find a solution.”
Sep 10, 2025, Source: Yahoo Finance

➡️ Sen. Cruz Proposes AI ‘Sandbox’ to Ease Regulations | Jody Godoy (Reuters) reports: Republican Sen. Ted Cruz has introduced a bill to create an AI “sandbox,” allowing companies to apply for temporary exemptions from federal regulations. The proposal, aimed at boosting U.S. competitiveness with China, would grant two-year waivers for experimentation, provided companies outline safety and financial risk mitigation. Cruz, who chairs the Senate Commerce Committee, said the bill would “embrace entrepreneurial spirit” while balancing consumer protections. Congress will now consider whether to advance the measure.
Sep 10, 2025, Source: Reuters

Article by: Dorna Moini

Share the Dividend: Pricing Models for AI‑Enabled Legal Work

Clients are asking a fair question: If AI speeds up drafting and review, how does that change the way you staff and price matters?

I’m the CEO of Gavel and a former practicing attorney at Sidley Austin. This question has been relevant since before the generative AI boom in legal. I still remember clients “forcing” us to use Relativity for document review in 2013. The firms that answer the pricing question clearly don’t just avoid friction; they win work. 

This article offers a practical guide for transactional practices based on my experience talking to our law firm customers at Gavel, particularly those using our AI contract analysis and redlining tool. It covers what to disclose, how to structure quality controls, and four pricing models that reward efficiency. It also shows how to measure your baseline so fees reflect value, not just elapsed time.

The AI Regulation Tracker offers a clickable global map that gives you instant snapshots of how each country is handling AI laws, along with the most recent policy developments.

The most recent developments from the past week:

📋 15 September 2025 | New AI safety governance framework unveiled: It is reported that the 2.0 version of the Artificial Intelligence Safety Governance Framework was unveiled at the 2025 Cybersecurity Week in Kunming, Yunnan province. This updated framework, developed collaboratively by the National Computer Network Emergency Response Technical Team/Coordination Center of China (CNCERT/CC) and various AI institutions, builds upon its 2024 predecessor by incorporating advancements in AI technology and refining risk classifications and preventive measures. An official from CNCERT/CC emphasized that the new version aligns with global AI development trends, promoting a secure and trustworthy AI ecosystem through a collaborative governance model that spans borders, fields, and industries. The release also aims to enhance international cooperation in AI safety governance and encourage the inclusive sharing of technological achievements worldwide. The text of the updated framework has yet to be published.

📋 14 September 2025 | California AI Safety Bill passes legislature, awaiting governor approval: California’s state legislature has passed a landmark AI Safety Bill, SB 53, which sets new transparency requirements for large AI companies and mandates disclosure and certification of safety testing methods. If signed by Governor Gavin Newsom, the bill will require companies with over $500 million in annual revenue to publicly disclose safety protocols, submit to independent audits starting in 2030, protect whistleblowers, and promptly report critical AI incidents, with civil penalties of up to $1 million per violation for non-compliance. The bill is backed by companies like Anthropic but faces opposition from major Silicon Valley players, venture capital firms, and lobbying groups, some of whom argue for federal rather than state-level regulation. The legislation aims to balance innovation with public safety and could set a national precedent, although its fate now rests with Governor Newsom, who has previously vetoed broader AI safety proposals over concerns about hampering technical progress.

📋 12 September 2025 | German government starts consultation on law to implement EU rules: The German Federal Ministry for Digital and State Modernization has initiated consultations with states and associations on the draft "AI Market Surveillance and Innovation Promotion Act" (KI-MIG), which aims to implement the EU AI Act across all sectors. The draft designates the Federal Network Agency as the central supervisory authority, establishes a Coordination and Competence Center for AI (KoKIVO), and introduces measures such as an AI service desk and a real-world AI laboratory to support innovation. The EU AI Act, effective since 2 August 2024, mandates that national supervisory structures be established by 2 August 2025; however, due to early federal elections, this deadline was missed, prompting the government to expedite the legislative process.

AI Tools that will supercharge your productivity

🆕 AfriWise - Break free of the chaos of finding African laws and regulations, and simplify compliance

🆕 Diligen - Identify key provisions, create contract summaries and improve team collaboration. All from one simple, secure interface.

🆕 Lucio - Enterprise AI solutions for document review, drafting, and more.

Want more Legal AI Tools? Check out our Top AI Tools for Legal Professionals

The weekly ChatGPT prompt that will boost your productivity

This prompt instantly organizes the entire close, so nothing slips, responsibilities are clear, and the team executes on a single, living checklist instead of scattered emails.

Instructions:
Provide: deal type (M&A / financing / licensing / real estate), governing law, parties, target close date, and any special conditions (regulatory approvals, third-party consents, financing). Ask for an output that:

1. Builds a closing checklist table (item, owner, dependencies, required doc/form, signature/notary needs, due date).

2. Lists regulatory/filing items (e.g., HSR, UCC-1, SEC/Companies House, consents) with lead times.

3. Generates a funds-flow template and deliverables list (certificates, opinions, bring-downs, schedules).

4. Flags critical path risks and proposes mitigation steps.

5. Produces a week-by-week timeline to closing with check-ins and contingencies.

Collecting Data to Make Artificial Intelligence Safer

The Responsible AI Collaborative is a not‑for‑profit organization working to present real‑world AI harms through its Artificial Intelligence Incident Database.

View the latest reported incidents below:

⚠️ 2025-09-07 | Russian Disinformation Campaign Reportedly Used AI-Generated Posts and Videos to Target 2025 Moldovan Parliamentary Elections | View Incident

⚠️ 2025-08-28 | Meta AI on Instagram Reportedly Facilitated Suicide and Eating Disorder Roleplay with Teen Accounts | View Incident

⚠️ 2025-01-28 | Purportedly AI-Generated Deepfake Image Reportedly Falsely Links Canadian Prime Minister Mark Carney to Jeffrey Epstein | View Incident


Thank you so much for reading The Legal Wire newsletter!

If this email lands in your “Promotions” or “Spam” folder, move it to your primary folder so you don’t miss the next Legal Wire :)

Did we miss something or do you have tips?

If you have any tips for us, just reply to this e-mail! We’d love any feedback or responses from our readers 😄 

Disclaimer

The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs ("Materials"), are accurate and complete.

Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not necessarily reflect the current laws or regulations.

The Materials should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.
