
This Week in Legal AI: Tougher Rules, Smarter Schools, Sharper Risks

Harvey takes AI to UK law schools: are your trainees already ahead of you?


Read time: under 5 minutes

Welcome to this week's edition of The Legal Wire!

California just raised the compliance bar: new youth-safety AI laws mean age checks, chatbot warnings, and real penalties, plus less room to blame “autonomous AI” when harm occurs. Meanwhile, a UNSW report logs 520+ cases of GenAI misuse (fake cites, bad summaries), underscoring why human review isn’t optional. State AGs teamed with OpenAI and Microsoft on a child-safety task force, signaling faster, coordinated scrutiny even as Congress stalls. On the talent front, Harvey is rolling out its law-school program to Oxford, King’s, BPP, and The University of Law; future trainees will arrive AI-literate. And in-house is doubling down: GC AI raised $60M to scale workflow automation for legal teams.

Plus: our interview with AI.Law founder Troy Doucet on a newly issued U.S. patent that could reshape how long, structured legal documents are generated. It’s a provocative look at speed, scope, and the IP behind it. Make sure to read the full Q&A.

This week’s Highlights:

  • Industry News and Updates

  • AI.Law Secures Landmark Patent for AI-Generated Legal Documents

  • AI Regulation Updates

  • AI Tools to supercharge your productivity

  • Legal prompt of the week

  • Latest AI Incidents & Legal Tech Map

Headlines from The Legal Industry You Shouldn't Miss

➡️ California’s new AI laws raise compliance stakes for tech platforms | California has passed several youth-focused AI and online-safety laws that force platforms and chatbot providers to issue warnings to minors, verify ages, and build self-harm response systems. These rules introduce steep penalties and expand potential liability for harmful AI outputs. Legal experts expect First Amendment and federal preemption challenges, especially over mandated warnings and age-verification duties. The laws also limit companies’ ability to argue that “autonomous AI” caused the harm, creating new exposure for chatbot makers. For lawyers, the takeaway is clear: tech clients now face a tougher, more fragmented regulatory landscape, and more states are likely to follow California’s lead.
Nov 17, 2025, Source: Bloomberg Law

➡️ Harvey brings its law school AI program to the UK | Harvey is expanding its law school program to the UK through new partnerships with Oxford, The University of Law, King’s College London, and BPP. For the legal sector, the move accelerates AI literacy in core legal training. Future lawyers will arrive in practice already trained on generative AI tools for research, drafting, and analysis, raising the bar for firms to modernize their own workflows and adopt compatible AI capabilities.
Nov 14, 2025, Source: Harvey

➡️ UNSW Report Finds 520+ Court Cases Involving GenAI Misuse | A new UNSW Law report has identified more than 520 legal cases since 2023 where generative AI was misused, most often through fake citations, incorrect summaries, and flawed legal reasoning. Most incidents involve self-represented litigants, especially in tribunals and lower courts, but lawyers are also implicated. For the legal profession, the findings highlight rising risks: courts are seeing more AI-generated errors, verification duties are tightening, and misuse is increasingly leading to judicial warnings or sanctions. The report stresses that GenAI can assist legal work, but only with proper safeguards and human review.
Nov 14, 2025, Source: Techxplore

➡️ State AGs, OpenAI, and Microsoft Launch AI Safety Task Force | North Carolina and Utah attorneys general have created a new AI Safety Task Force with OpenAI and Microsoft to establish basic safeguards for AI developers, focusing heavily on child protection and harmful outputs. With Congress stalled on AI regulation, this move positions state AGs as front-line AI enforcers, setting expectations that could shape liability and compliance for tech companies. The task force also allows states to coordinate investigations, meaning AI firms, and the lawyers who advise them, should expect closer, faster scrutiny of AI-related risks.
Nov 13, 2025, Source: CNN Business

➡️ GC AI Raises $60 Million to Expand Legal AI for In-House Teams | San Francisco-based GC AI has raised $60 million in Series B funding led by Scale Venture Partners and Northzone, valuing the company at $555 million. The platform, built specifically for corporate legal teams, helps over 1,000 companies, including News Corp, Skims, and Zscaler, automate contract review, compliance, and regulatory work. Founded by former Amazon and Replit lawyer Cecilia Ziniti, GC AI has grown its revenue from $1 million to over $10 million in under a year, cutting outside counsel costs and boosting internal efficiency. The new funding will accelerate product development and expand enterprise capabilities as in-house legal teams increasingly turn to AI to manage growing workloads and strategic demands.
Nov 12, 2025, Source: BusinessWire

Will this be the Next Big Thing in AI?

Legal Technology

AI.Law Secures Landmark Patent for AI-Generated Legal Documents

When Troy Doucet founded AI.Law, his goal was simple yet ambitious: help litigation teams draft faster, smarter, and more safely. Now, that vision has been codified into U.S. Patent No. 12,461,932 B1, which covers how unstructured data is transformed into structured legal documents using a large language model and a context-aware feedback loop, a process that enables the creation of lengthy, high-quality drafts without human intervention.

The patent’s scope extends beyond litigation support and touches on the mechanics that major AI systems use to generate complex documents. For Troy, the recognition underscores how far ahead AI.Law has been since its inception. As he puts it, “virtually all the big AI players now use our process.”
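To make those mechanics concrete, here is a purely illustrative Python sketch of a generic draft-critique-revise loop over a document outline. It is not AI.Law’s patented process: `llm_complete`, the outline, and the facts below are hypothetical placeholders for whichever LLM API and matter details you would actually use.

```python
# Purely illustrative sketch of a generic draft-critique-revise loop,
# NOT AI.Law's patented process. `llm_complete`, the outline, and the
# facts are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Section:
    heading: str
    draft: str = ""

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for whichever LLM completion API you use."""
    return f"[model output for: {prompt[:50]}...]"

def draft_document(outline: list[str], facts: str, max_passes: int = 2) -> list[Section]:
    """Draft each section with the document-so-far as context, then
    run a critique/revise feedback pass on each draft."""
    sections = [Section(h) for h in outline]
    for section in sections:
        # Context-aware drafting: include everything written so far so
        # later sections stay consistent with earlier ones.
        context = "\n\n".join(s.draft for s in sections if s.draft)
        section.draft = llm_complete(
            f"Facts:\n{facts}\n\nDocument so far:\n{context}\n\n"
            f"Draft the section titled '{section.heading}'."
        )
        for _ in range(max_passes):
            # Feedback loop: the model critiques its own draft, then revises.
            critique = llm_complete(
                f"Critique this draft of '{section.heading}' for accuracy, "
                f"structure, and consistency with the facts:\n{section.draft}"
            )
            section.draft = llm_complete(
                f"Revise the draft to address this critique:\n{critique}\n\n"
                f"Draft:\n{section.draft}"
            )
    return sections

# Example usage with invented inputs:
doc = draft_document(
    outline=["Introduction", "Statement of Facts", "Argument"],
    facts="Plaintiff alleges breach of a supply contract signed in 2023.",
)
```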

The Legal Wire recently interviewed Troy to discuss the patent’s implications, its impact on LegalTech, and what this milestone means for the future of AI-assisted litigation.

AI Regulation Updates

The AI Regulation Tracker offers a clickable global map that gives you instant snapshots of how each country is handling AI laws, along with the most recent policy developments.

The most recent developments from the past week:

📋 16 November 2025 | Egypt has taken serious steps toward an AI governance framework, says foreign minister: At the second edition of the AI, Data Centers and Cloud Conference and Exhibition (AIDC2 '25) in Cairo, Egypt's Foreign Minister Badr Abdelatty reportedly emphasized the nation's efforts to establish a comprehensive national framework for AI governance, including the creation of the National Council for Artificial Intelligence and the adoption of a phased national strategy to integrate AI into various sectors and strengthen digital infrastructure. Abdelatty highlighted AI's potential to accelerate sustainable development and innovation, stressing Egypt's leading regional role in crafting unified Arab and African AI strategies, its active engagement in global AI governance discussions at the UN, and its ambition to reinforce its position as a regional hub for technological innovation.

📋 13 November 2025 | MAS releases guidelines for AI risk management: The Monetary Authority of Singapore (MAS) has issued new guidelines for AI risk management, emphasising the importance of financial institutions implementing robust frameworks to ensure the responsible use of AI technologies. These guidelines focus on key areas such as data governance, model validation, and accountability, aiming to enhance transparency and mitigate potential risks associated with AI applications in the financial sector. MAS underscores the necessity for institutions to establish clear policies and procedures to manage AI-related risks effectively, thereby fostering trust and confidence in AI-driven financial services.

📋 12 November 2025 | MAS and UK FCA announce partnership on AI in finance: The Monetary Authority of Singapore (MAS) and the UK Financial Conduct Authority (FCA) have announced a partnership on the development and implementation of AI in the financial sector. This collaboration aims to enhance the understanding and adoption of AI technologies, promote responsible innovation, and strengthen the resilience and efficiency of financial markets in both jurisdictions.

📋 12 November 2025 | South Korea's Ministry of Science and ICT releases draft enforcement decree of AI Basic Act: South Korea's Ministry of Science and ICT has released the draft enforcement decree of the AI Basic Act, seeking public feedback for 40 days (by 22 December 2025) before finalising it ahead of the law's implementation on 22 January 2026. The decree clarifies regulatory frameworks and establishes support systems for AI development. In particular, the draft clarifies: (1) standards for support projects to foster the AI industry; (2) designation of institutions to support national AI policy implementation; (3) that a "high-performance AI model," subject to safety requirements, is defined as one exceeding 10^26 FLOPs of cumulative computing power; (4) clear disclosure and labeling requirements for "deepfakes" and other generative content to ensure user awareness; and (5) that AI used exclusively for national defense or security purposes is exempt from the law’s application. The decree also provides a grace period of over one year before fines are imposed for violations during the initial implementation phase.

AI Tools that will supercharge your productivity

🆕 Edenreach - Transforming Justice Through Purpose-Driven Funding. Empowering investors and legal professionals to drive societal change while achieving financial returns.

🆕 Expert Radar - In-depth analysis of an expert’s litigation history, helping you identify conflicting testimony, potential biases, and questionable credentials.

🆕 Juro - Empower your team to agree and manage contracts end-to-end, with flexible AI automation that lives where you live.

Want more Legal AI Tools? Check out our Top AI Tools for Legal Professionals.

The weekly ChatGPT prompt that will boost your productivity

Why it helps: Turns scattered end-of-matter tasks into a single, reusable package, speeding client wrap-up, preserving know-how, and keeping your DMS and billing clean without hours of manual tidying.

Instructions:
Provide the matter name, practice area, venue, and a few bullets on outcome, key filings, and dates. Ask for a ready-to-file closeout pack that includes the following (a sample prompt appears after the list):

- A one-page closing memo (issues, strategy, result, lessons learned)

- A documents index with final versions and locations (DMS links/placeholders)

- Client wrap-up email draft (what happened, next steps, retention/return of files)

- Knowledge-base entry (tags, reusable arguments/clauses, cites)

- Billing recap (phase summary + write-off flags)

- A checklist for archival/retention and conflict status updates
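Put together, the prompt might read like this, with bracketed placeholders standing in for your matter details:

“Matter: [name]. Practice area: [area]. Venue: [venue]. Outcome: [bullets]. Key filings and dates: [bullets]. Produce a ready-to-file closeout pack with: (1) a one-page closing memo (issues, strategy, result, lessons learned); (2) a documents index with final versions and DMS link placeholders; (3) a client wrap-up email draft (what happened, next steps, retention/return of files); (4) a knowledge-base entry (tags, reusable arguments/clauses, cites); (5) a billing recap (phase summary plus write-off flags); and (6) an archival/retention and conflict-status checklist.”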

Collecting Data to Make Artificial Intelligence Safer

The Responsible AI Collaborative is a not-for-profit organization working to present real-world AI harms through its Artificial Intelligence Incident Database.

View the latest reported incidents below:

⚠️ 2025-11-08 | Alleged AI-Generated Deepfake of Western Australia Premier Roger Cook Used in YouTube Investment Scam | View Incident

⚠️ 2024-01-27 | AI-Generated Deepfake of Andrew Forrest Used to Promote Fraudulent 'Quantum AI' Crypto Platform on Facebook | View Incident

⚠️ 2025-07-25 | ChatGPT Allegedly Encouraged 23-Year-Old Texas User's Suicide During Extended Conversations | View Incident


Thank you so much for reading The Legal Wire newsletter!

If this email gets into your “Promotions” or “Spam” folder, move it to the primary folder so you do not miss out on the next Legal Wire :)

Did we miss something or do you have tips?

If you have any tips for us, just reply to this e-mail! We’d love any feedback or responses from our readers 😄 

Disclaimer

The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs ("Materials"), are accurate and complete.

Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not necessarily reflect the current laws or regulations.

The Materials should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.
