Read time: under 10 minutes

Welcome to this week's edition of The Legal Wire!

The past week drew a cleaner map of who’s steering AI right now. In the U.S., states are pressing ahead with their own guardrails as the White House warns against a fragmented “patchwork,” setting up a direct clash over who gets to set the rules. In Europe, policymakers chose a two-speed approach: more time to comply with the AI Act’s high-risk obligations, but less patience for obvious harms, with momentum building behind an outright ban on “nudify” tools.

Meanwhile, the infrastructure layer kept tightening. South Korea put real money behind its “K-Nvidia” ambitions by backing Rebellions, as governments race to secure chips and capacity. In Washington, the fight over AI access also moved into court, with a judge blocking the attempt to blacklist Anthropic from federal work and forcing the administration to justify its national-security theory on appeal.

Our feature this week looks at what happens when litigation strategy gets rebuilt with math and automation: SettleIndex’s new AI-driven settlement modeling, and what it signals about the next phase of decision-support tools in disputes.

This week’s Highlights:

  • Industry News and Updates

  • SettleIndex Has Levelled Up: Automated Settlement Modeling, Powered by AI

  • The Double-Edged Sword of Agentic AI – Will Autonomous Workflows Break the Billable Hour?

  • AI Regulation Updates

  • AI Tools to Supercharge your productivity

  • Legal prompt of the week

  • Latest AI Incidents & Legal Tech Map

Headlines from The Legal Industry You Shouldn't Miss

➡️ States Press Ahead on AI Rules, Setting Up a Showdown With Trump | A state–federal clash over AI regulation is intensifying. The White House has warned that a “patchwork” of state laws could undermine U.S. competitiveness and is pushing a federal framework that would curb or pre-empt state action. But California Governor Gavin Newsom aims to impose safety and privacy guardrails on AI companies contracting with the state and is pledging to defend California’s existing AI protections. The dispute unfolds as states continue advancing dozens of new AI bills covering issues like child safety, transparency, and security, while Congress remains largely stalled, leaving governance to be fought state by state.
Mar 30, 2026, Source: New York Times

➡️ EU Delays AI Act Deadlines, but Fast-Tracks a Ban on “Nudify” Tools | The European Parliament has pushed back key EU AI Act compliance timelines, giving developers of high-risk AI systems until the end of 2027 to meet the Act’s requirements, and extending sector-linked obligations to August 2028. The shift is part of a broader push in Brussels to make implementation more workable and keep the EU competitive while standards and enforcement capacity catch up. Lawmakers also signaled they won’t slow down on clear-cut harms: Parliament backed an outright ban on “nudify” apps that generate non-consensual intimate images. Baseline transparency measures, including requirements tied to labeling synthetic content, are expected to stay on track as negotiations with member states continue.
Mar 29, 2026, Source: Competition Policy International

➡️ South Korea Backs Rebellions With $166M to Build a “K-Nvidia” | As reported by Reuters, South Korea has approved a 250 billion won ($166 million) investment in AI chip startup Rebellions through its state-led National Growth Fund, signaling a stronger push to cultivate a homegrown advanced semiconductor player. Rebellions designs neural processing units for AI workloads, and the funding is slated to support mass production of its chips and development of next-generation AI semiconductors. The investment is the first direct deal under Seoul’s “K-Nvidia” initiative, as the government looks to strengthen its role in the AI supply chain and reduce reliance on foreign chipmakers amid surging demand for high-performance AI computing.
Mar 26, 2026, Source: Thomson Reuters

➡️ Judge Blocks Anthropic “Supply-Chain Risk” Blacklist, Pauses Order for Appeal | A federal judge in Northern California has blocked the Trump administration and Pentagon from branding Anthropic a national-security “supply-chain risk” and cutting it off from federal work, finding the move likely unlawful and lacking a legitimate factual basis. The court also criticized the lack of notice or due process before the public blacklist, and said Anthropic’s push for usage restrictions doesn’t justify treating it as a potential saboteur. The injunction restores the status quo for now, but is paused for a week to allow the administration to appeal.
Mar 26, 2026, Source: NBC News

➡️ Harvey Raises $200M at $11B to Scale Legal AI Agents | Harvey has secured $200 million in new funding at an $11 billion valuation, co-led by GIC and Sequoia, to expand the AI “agents” firms and in-house teams run on its platform and to grow its embedded legal engineering teams. The company noted that 25,000+ custom agents are already operating across workflows like M&A, due diligence, contract drafting, and review, and says adoption now spans a majority of the AmLaw 100 plus hundreds of in-house teams globally.
Mar 25, 2026, Source: Harvey

Will this be the Next Big Thing in A.I.?

Legal Technology

SettleIndex Has Levelled Up: Automated Settlement Modeling, Powered by AI

We’ve written about SettleIndex before at The Legal Wire, but a new update has us taking a fresh look. The platform, known for applying mathematical rigor to litigation risk, has recently introduced AI-powered automation that promises to make modeling faster, smarter, and more accessible to lawyers. CEO Robert Hogarth shared a sample report with us that shows how their latest evolution is reshaping litigation strategy in real-world cases.

From Custom Models to Click-and-Go Risk Analysis

SettleIndex has moved beyond manual inputs to a streamlined, AI-enhanced workflow. Instead of building models from scratch, users now upload claim documents, such as letters or pleadings, and the system handles the rest. Within a minute (yes, a minute!), SettleIndex extracts the amounts claimed and the key legal issues, then generates a dual-perspective litigation risk model using decision-tree theory.
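SettleIndex’s own model is proprietary, but the decision-tree idea behind this kind of valuation can be sketched in a few lines. Everything below, from the branch probabilities to the cost figures, is an illustrative assumption for a hypothetical money claim, not the platform’s actual method:

```python
# Minimal expected-value decision tree for a hypothetical money claim.
from dataclasses import dataclass

@dataclass
class Outcome:
    probability: float  # chance this branch of the tree occurs
    recovery: float     # amount the defendant pays the claimant on this branch

def gross_expected_value(outcomes):
    """Probability-weighted payout across all branches of the tree."""
    assert abs(sum(o.probability for o in outcomes) - 1.0) < 1e-9, "branches must sum to 1"
    return sum(o.probability * o.recovery for o in outcomes)

# Illustrative three-branch tree for a 1,000,000 claim.
tree = [
    Outcome(0.4, 1_000_000),  # claim succeeds in full
    Outcome(0.3, 400_000),    # partial success
    Outcome(0.3, 0),          # claim dismissed
]

gross = gross_expected_value(tree)    # 520,000 probability-weighted payout
claimant_value = gross - 150_000      # net of the claimant's own legal costs
defendant_exposure = gross + 100_000  # payout plus the defendant's own costs

# The gap between the two figures (370,000 to 620,000 here) is the settlement
# zone where both sides do better than their expected trial outcome.
```

The neutral settlement zone this arithmetic produces is what makes a shared, dual-perspective model useful: each side can adjust the probabilities, but both start from the same figures.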

We recently spoke with Robert to dig a little deeper into what’s new at SettleIndex.

TLW: The new dual-party modeling shows both claimant and defendant positions. What is the benefit of this for clients such as law firms and insurers?

Robert: “One of the problems in litigation risk modelling is that it takes two to tango. It doesn’t help one party to create a scientific risk model, if they are negotiating with an opponent who takes an irrational view of their chances. A brilliant aspect of SettleIndex is that it produces a neutral valuation of the dispute, which both parties can share. The parties can adjust any parameters, but they will be starting from a fair settlement figure. This has the power to change behaviour, which is necessary for clients to see the benefits.”

Will this be the Next Big Thing in A.I.?

The Double-Edged Sword of Agentic AI – Will Autonomous Workflows Break the Billable Hour?

The legal technology landscape in 2026 has officially moved past the novelty of generative chatbots. Today, the focus is decisively on Agentic AI — proactive, autonomous software programs designed to execute multi-step legal workflows with minimal human intervention. Agentic systems build on generative AI by giving models access to tools and greater autonomy to act in the digital and sometimes physical world. From natively integrated copilots like Spellbook living within Microsoft Word to custom-built infrastructure platforms modeled on Harvey, AI is no longer just summarizing documents. It is strategizing litigation, organizing massive data rooms, and generating complete contracts from scratch in a fraction of traditional timeframes.
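The “agentic” pattern described above, a model given access to tools and the autonomy to loop until a task is done, can be sketched in miniature. The model and the tool below are stubs invented purely for illustration; real platforms wire this loop to an actual LLM API and to live document systems:

```python
# Toy "agentic" loop: the model proposes tool calls until it can answer.
# `fake_model` and the stub tool stand in for a real LLM and a real DMS API.

def fake_model(task, observations):
    """Pretend model: first asks for a document count, then answers."""
    if not observations:
        return {"action": "count_documents", "args": {"folder": "data_room"}}
    return {"action": "final_answer",
            "args": {"text": f"Reviewed {observations[-1]} documents."}}

TOOLS = {
    "count_documents": lambda folder: 3,  # stub; a real tool would query a DMS
}

def run_agent(task, max_steps=5):
    observations = []
    for _ in range(max_steps):
        step = fake_model(task, observations)
        if step["action"] == "final_answer":
            return step["args"]["text"]
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[step["action"]](**step["args"])
        observations.append(result)
    return "Step limit reached without an answer."

print(run_agent("Summarize the data room"))  # Reviewed 3 documents.
```

The economic tension the next paragraph raises follows directly from this loop: each iteration costs seconds of compute, not billable attorney hours.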

But as these tools evolve from reactive assistants to proactive colleagues — increasingly described as a “new digital workforce” or “digital associate” — they force an unavoidable debate among legal practitioners: is the legal fraternity ready for the disruption of its foundational economic model?

FutureLaw 2026 is one of Europe’s most credible and fastest‑growing legal‑innovation conferences — the clear #1 in Northern Europe and a top‑5 global event by thematic depth and institutional relevance. Its focused scale, high‑level regulatory access, and Estonia’s digital‑state context make it uniquely influential for the legal‑tech and digital‑transformation community.

On 14–15 May 2026, FutureLaw brings together 500+ leaders from law firms, corporate legal departments, legal‑tech companies, academia, and public institutions. The program spans AI in legal practice, legal policy, digital governance, legal design, ethics, platformization, regulatory innovation, and the future of legal operations — all highly relevant to the EU market and beyond.

We invite the Legal Wire community to join us in Tallinn — a rare opportunity to engage directly with EU‑level policymakers, global innovators, and digital‑state architects from around the world.

Use the exclusive partner code LWIRE to receive 20% off your ticket.

Featured speakers include:
• Charles Pare — Senior Advisor to the Board & Executive Committee – confidential Holding 
• Christina Blacklaws — Former President, Law Society of England & Wales
• Pēteris Zilgalvis — Judge, General Court of the EU
• Damien Riehl — Solutions Champion, Clio
• Paul Nemitz — Principal Advisor, EU Commission (ret.) | "Godfather of the GDPR"

Explore the full program and secure your discounted ticket:
👉 https://futurelaw.ee

The AI Regulation Tracker offers a clickable global map that gives you instant snapshots of how each country is handling AI laws, along with the most recent policy developments.

The most recent developments from the past week:

📋 29 March 2026 | ICT Ministry approves a comprehensive three-year plan to establish South Korea as a top-3 AI powerhouse: The Ministry of Science and ICT approved a three-year national road map to establish South Korea as a top-three global leader in AI by 2028, focusing on the commercialization of 6G technology and enhancing AI capabilities across various industries. The plan includes four main pillars: expanding digital infrastructure, strengthening digital capabilities, advancing intelligent informatization, and fostering an inclusive digital environment. Key initiatives involve upgrading 5G networks, accelerating 6G development for a 2030 rollout, enhancing cybersecurity, and building a national data platform.

📋 27 March 2026 | Top AI conference reverses ban on papers from US-sanctioned entities after Chinese boycott: The Conference on Neural Information Processing Systems (NeurIPS) has reversed a policy banning papers from researchers at entities under US sanctions, following a boycott from the China Association for Science and Technology (CAST). Initially, NeurIPS announced the policy to comply with U.S. law, expanding previous restrictions that only targeted entities on the U.S. Treasury's Specially Designated Nationals List. The announcement caused significant backlash in China, prompting CAST to halt funding applications for members wishing to attend NeurIPS and redirect them to other conferences. NeurIPS later said the restrictions were a miscommunication error and that the updated policy would only restrict submissions from those on the SDN list.

📋 24 March 2026 | US Department of Labor launches ‘Make America AI-Ready’ initiative: The U.S. Department of Labor has launched the 'Make America AI-Ready' initiative, offering a free artificial intelligence literacy course accessible by texting 'READY' to 20202. This program aims to equip American workers with foundational AI skills through daily, text-based lessons and challenges, supporting the Trump Administration's commitment to preparing the workforce for an AI-driven economy as outlined in America's Talent Strategy and the White House's AI Action Plan.

📋 24 March 2026 | 'More time' would help states investigating AI mergers, says new NAAG task force chair: The National Association of Attorneys General (NAAG) Antitrust Task Force has reportedly appointed Marie Martin of the Utah Office of the Attorney General as vice chair, a newly created role reflecting the task force's increased workload in recent years. This appointment underscores the task force's commitment to enhancing its capacity to address complex antitrust issues, including those arising from AI mergers. The task force continues to advocate for legislative clarity on algorithmic collusion and emphasizes the need for sufficient time to thoroughly investigate AI-related mergers to ensure compliance with antitrust laws.

AI Tools that will supercharge your productivity

🆕 Legora - Legora streamlines everything from research to drafting and review — helping lawyers spend less time managing process, and more time delivering value.

🆕 Centari - Precision-driven deal intelligence. Empowering industry-leading dealmakers to achieve superior outcomes with data and AI.

🆕 Definely - Designed for control. Specialist drafting and review tools for lawyers working on complex contracts in Word.

Want more Legal AI Tools? Check out our
Top AI Tools for Legal Professionals

The weekly ChatGPT prompt that will boost your productivity

Why it helps: Gives your team a clear, low-risk roadmap to adopt AI.

Prompt:
We want to adopt AI in a [solo practice / law firm / in-house team]. Based on the details below, create a practical adoption playbook that we can implement immediately.

Inputs:

- Team size + roles: [ ]
- Practice areas: [ ]
- Current tools (email/DMS/PMS/billing): [ ]
- Top 5 pain points (time sinks): [ ]
- Risk tolerance: [low/medium/high]
- Jurisdiction(s): [ ]
- Timeline: [30/60/90 days]

Output:

- The top 7 AI use cases to roll out first (ranked by time saved vs risk).
- Tooling and data rules (what’s allowed, what’s prohibited, redaction, retention).
- A simple human-review checklist for anything sent/filed.
- A 30/60/90-day rollout plan with owners and quick wins.
- KPIs to track (time saved, quality, adoption, errors).
- A short staff training script (how to prompt, what not to do).

Collecting Data to make Artificial Intelligence Safer

The Responsible AI Collaborative is a not‑for‑profit organization working to present real‑world AI harms through its Artificial Intelligence Incident Database.

View the latest reported incidents below:

⚠️ 2026-03-19 | Purported Deepfake Applicant Reportedly Impersonated Tokyo IT Executive Kenbun Yoshii During Online Job Interview | View Incident

⚠️ 2026-02-27 | Meta AI Smart Glasses Reportedly Exposed Intimate User Imagery and Video to Human Reviewers in Kenya | View Incident

⚠️ 2025-07-14 | Purported Facial Recognition Error Reportedly Led to Arrest and Jailing of Tennessee Woman in North Dakota Fraud Case | View Incident

The Legal Wire is an official media partner of:

Thank you so much for reading The Legal Wire newsletter!

If this email gets into your “Promotions” or “Spam” folder, move it to the primary folder so you do not miss out on the next Legal Wire :)

Did we miss something or do you have tips?

If you have any tips for us, just reply to this e-mail! We’d love any feedback or responses from our readers 😄

Disclaimer

The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs ("Materials"), are accurate and complete.

Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not necessarily reflect the current laws or regulations.

The Materials should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.
