
Sanctions Spike, Disney Bets on OpenAI, Lexis Drops New AI

Ethics Heat Up, Disney Cashes In, Spellbook & Lexis Set the Pace

Read time: under 4 minutes

Welcome to this week's edition of The Legal Wire!

Courts are losing patience with sloppy AI. Sanctions over hallucinated citations are driving a surge in mandatory ethics training across bars and firms, pushing “human-in-the-loop” from nice-to-have to non-negotiable. Meanwhile, Disney is playing both sides of the AI chessboard, licensing its characters to OpenAI while firing off C&Ds at unlicensed model training. On the tools front, LexisNexis rolled out the next generation of Protégé General AI, and Gavel doubled down on rules-based automation with new Workflows. And if you want a closer look at where contract work is headed, our feature on Spellbook shows why Word-native copilots are fast becoming a favourite.

This week’s Highlights:

  • Industry News and Updates

  • Spellbook: The AI-Powered Contract Copilot Taking Legal Work Seriously (So You Don’t Have To)

  • AI Regulation Updates

  • AI Tools to Supercharge your productivity

  • Legal prompt of the week

  • Latest AI Incidents & Legal Tech Map

Headlines from The Legal Industry You Shouldn't Miss

➡️ AI Missteps in Court Drive Surge in Legal Ethics and Training Programs | Reported by Bloomberg Law: A growing wave of lawyers sanctioned for submitting AI-hallucinated citations is fueling rapid growth in AI-focused legal education. State bars, courts, and law firms are expanding mandatory training to stress human oversight, ethical responsibility, and verification, as misuse of generative AI in filings surged sharply throughout 2025.
Dec 15, 2025, Source: Bloomberg Law

➡️ Disney Embraces AI With OpenAI Deal, While Cracking Down on Unpaid Use of Its Content | Disney has announced a $1 billion partnership with OpenAI to license its iconic characters for AI-generated video, while simultaneously sending cease-and-desist letters to companies it says are training AI on Disney content without permission. The move underscores Disney’s dual strategy: aggressively expanding its own AI ambitions while tightly policing the use of its intellectual property by others.
Dec 11, 2025, Source: Inside The Magic

➡️ LexisNexis Unveils Next-Generation Protégé General AI for Unified Legal Workflows | LexisNexis has launched the next generation of Protégé General AI, bringing authoritative legal content, customer documents, and open web insights into a single secure AI workflow backed by Shepard’s® Citations. The update strengthens LexisNexis’ push toward agentic, model-flexible legal AI designed to support drafting, research, and complex problem-solving in one integrated environment.
Dec 11, 2025, Source: The Legal Wire

➡️ Gavel Launches Workflows, Reaffirming the Case for Rules-Based Legal Automation | Gavel has introduced Gavel Workflows, expanding its document automation platform with advanced logic, calculations, and end-to-end workflows for high-volume legal work. The move signals a deliberate bet on rules-based automation alongside its growing AI product, Gavel Exec, arguing that structured templates remain faster, safer, and more reliable than generative AI for many legal documents.
Dec 9, 2025, Source: The Legal Wire

Will this be the Next Big Thing in AI?

Legal Technology

Spellbook: The AI-Powered Contract Copilot Taking Legal Work Seriously (So You Don’t Have To)

If you were building the ideal AI tool for transactional lawyers from scratch, you’d want it to live where lawyers do their day-to-day work, to speak their language, and to help (not hover) during every contract draft and negotiation. That’s the premise behind Spellbook, the legal AI copilot that operates natively inside Microsoft Word. It’s built for transactional lawyering, not just document summarizing, and it’s fast becoming the platform of choice for teams that want to draft smarter, redline faster, and negotiate like they’ve got a dozen extra hands.

Spellbook’s approach is grounded in what legal work actually looks like, minute by minute: toggling between redlines, referencing precedents, responding to client emails, checking clauses against market norms, and mentally tracking a dozen risk vectors. Their answer? A one-stop AI that not only sees what you see but anticipates your next move, without breaking your focus or your formatting.

This week, The Legal Wire interviewed Co-founder Scott Stevenson to unpack how a brand-new AI copilot is reshaping contract work from inside lawyers’ safe space: Microsoft Word.

The AI Regulation Tracker offers a clickable global map that gives you instant snapshots of how each country is handling AI laws, along with the most recent policy developments.

The most recent developments from the past week:

📋 15 December 2025 | China and Saudi Arabia pledge deeper hi-tech cooperation: China and Saudi Arabia have committed to deepening hi-tech cooperation, focusing on sectors such as new energy and AI, following a state visit by Chinese Foreign Minister Wang Yi to Riyadh. Saudi Arabia supports China in hosting the second China-Arab States Summit and the second China-Gulf Arab Cooperation Council Summit in 2026, and is willing to promote the early completion of negotiations on the China-Gulf Free Trade Agreement. Further details have yet to be revealed.

📋 11 December 2025 | President Trump signs executive order blocking states from regulating AI: US President Donald Trump has issued an executive order to establish a unified national policy framework for AI, aiming to eliminate state-level regulations that hinder innovation and create compliance challenges for AI companies. The order directs the creation of an AI Litigation Task Force to challenge state laws deemed unconstitutional or obstructive, and requires the Commerce Department to evaluate state AI laws, identifying those that compel false outputs or violate constitutional rights. States with onerous AI laws risk losing federal funding under programs like BEAD, while agencies may condition grants on states refraining from enforcing such laws. The FCC is tasked with considering a federal reporting and disclosure standard, and the FTC must issue guidance clarifying that state laws mandating deceptive AI outputs are preempted. Finally, the order calls for legislative recommendations to establish a uniform federal AI framework, while preserving state authority in areas such as child safety, infrastructure, and procurement.

📋 11 December 2025 | Japanese Government plans to double staff tasked with checking AI safety: The Japanese government reportedly plans to double the number of staff at a government-affiliated body that works to confirm the safety of AI technology, according to a draft of its basic plan on AI. The plan, set to be approved by the Cabinet within this month, will be the first of its kind the government has compiled. It states that staffing at the AI Safety Institute, a government-affiliated institution established in 2024, should be “immediately” expanded to about double its current level of roughly 30. The institution is expected to engage in activities such as developing a system to evaluate the safety of AI.

AI Tools that will supercharge your productivity

🆕 DraftPilot - Redline contracts accurately in minutes: the easiest-to-use AI redline solution, plug-and-play right in Microsoft Word.

🆕 Legalito - Smart Enquiries: bringing clarity to a dark corner of conveyancing.

🆕 Wordsmith AI - Helps legal teams instantly service other teams with contract reviews, template drafts, and policy guidance when and where they need it.

Want more Legal AI Tools? Check out our
Top AI Tools for Legal Professionals

The weekly Private LLM prompt that will boost your productivity

Why it helps: Turns a messy inbox into a clean fact chronology you can cite and act on fast.

Instructions:
Paste a bundle of case-related emails (or an export). Return a timeline with:

- Date/Time • Sender → Recipient(s)

- Event summary (1–2 lines)

- Attachments referenced

- Issue tag (e.g., notice, breach, damages)

- Gaps/next pulls (missing docs, follow-ups)

Collecting Data to make Artificial Intelligence Safer

The Responsible AI Collaborative is a not‑for‑profit organization working to present real‑world AI harms through its Artificial Intelligence Incident Database.

View the latest reported incidents below:

⚠️ 2025-02-05 | Attacker Reportedly Bypasses AI Safety Filters to Obtain Guidance for Non-Fatal Hammer Assault in Denmark | View Incident

⚠️ 2025-12-05 | The New York Times Sued Perplexity for Allegedly Using Copyrighted Content and Generating False Attributions | View Incident

⚠️ 2025-12-06 | Purported Deepfake Impersonating Cyprus President Nikos Christodoulides Reportedly Defrauded Citizens of Thousands of Euros | View Incident

The Legal Wire is an official media partner of:

Thank you so much for reading The Legal Wire newsletter!

If this email lands in your “Promotions” or “Spam” folder, move it to your primary folder so you don’t miss out on the next Legal Wire :)

Did we miss something or do you have tips?

If you have any tips for us, just reply to this e-mail! We’d love any feedback or responses from our readers 😄 

Disclaimer

The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs ("Materials"), are accurate and complete.

Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not necessarily reflect the current laws or regulations.

The Materials should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.
