
Read time: under 11 minutes
Welcome to this week's edition of The Legal Wire!
This week, the legal AI story split three ways: who gets to regulate it, who gets to run it, and who gets hurt when it fails. Colorado’s new AI Act is already in court after xAI moved to block it, while Brussels is weighing whether ChatGPT now crosses the line into “very large platform” territory under the Digital Services Act. In Washington, the mood turned more operational: U.S. officials privately warned bank CEOs about Anthropic’s latest security-grade model and the cyber risk of putting vulnerability-finding AI inside sensitive systems. And the enforcement drumbeat kept moving as the DOJ touted its first conviction under the Take It Down Act, while a D.C. appeals court refused (for now) to pause the Pentagon’s Anthropic blacklist.
Our feature this week flips the lens from tools inside firms to firms built around tools: Conrad Everhard on Flatiron Law Group, and what changes when legal tech isn't merely adopted but is the operating model itself.
On the customer story side: Nilay Choksi, General Counsel at Worksport, says Ruli AI is saving him 25–30 hours a month on contract review and its Monitor tool has even helped him spot when internal policies need updating.
This week’s Highlights:
Customer story feature: Solo GC. Maximum Impact. How Nilay Choksi reclaims 30 hours a month with Ruli.
Industry News and Updates
When a Law Firm Is Built Around Legal Tech
Use our NEW AI Regulation Updates feature
AI Tools to Supercharge your productivity
Legal prompt of the week
Latest AI Incidents & Legal Tech Map

30 Hours Back. Every Month. How One Solo GC Is Doing More with Ruli AI
Nilay Choksi, General Counsel at Worksport, evaluated over a dozen legal AI tools before choosing Ruli. The interface, the AI agent, the Word extension — all checked boxes. But it was the pre-sales customer service that closed the deal. Since signing on, he has saved 25 to 30 hours a month on contract review and discovered use cases he never anticipated — like using Ruli's Monitor feature to flag when internal company policies need updating. For a solo GC with a growing workload, that's not just a time-saver. That's a career-changer.


Headlines from The Legal Industry You Shouldn't Miss
➡️ xAI Sues to Block Colorado’s AI Act | Musk’s xAI is suing Colorado to stop enforcement of the state’s new AI Act, arguing it violates the First Amendment by forcing companies to make disclosures tied to the law’s anti-discrimination framing. Colorado’s law, effective Feb. 1, requires notice when AI plays a substantial role in “consequential decisions” (like jobs, housing, healthcare, and finance) and creates liability for AI-driven discrimination. State sponsors say it’s basic transparency and accountability, while the governor has previously pushed for tweaks to make the regime less burdensome.
Apr 13, 2026, Source: Competition Policy International
➡️ Banks Warned About Anthropic’s New Cybersecurity-Grade Model | U.S. officials privately cautioned leaders of major banks that Anthropic’s new “Claude Mythos Preview” could heighten cyber risk if deployed inside bank systems, because its vulnerability-finding capabilities might be exploited by bad actors. Treasury Secretary Scott Bessent convened the briefing with Fed Chair Jerome Powell; Anthropic says the model won’t be released publicly and will be limited to a 40-company “Project Glasswing” group that includes JPMorgan.
Apr 10, 2026, Source: The New York Times
➡️ ChatGPT Could Face Tougher EU Rules Under the DSA | The European Commission is assessing whether OpenAI’s ChatGPT should be classified as a “very large online platform” under the Digital Services Act after OpenAI published EU user numbers above the 45 million monthly threshold. OpenAI said its reporting shows ChatGPT Search averaged about 120.4 million monthly users in the EU over the six months ending September 2025, which could trigger stricter compliance duties if the designation is confirmed.
Apr 10, 2026, Source: Thomson Reuters
➡️ Ohio Man First to be Convicted Under Take It Down Act, DOJ Says | The Justice Department says an Ohio man is the first person convicted under the federal Take It Down Act, signed in May 2025, which criminalizes posting nonconsensual explicit imagery, including AI deepfakes. Prosecutors said he used AI to create and distribute nonconsensual sexual images involving adults and minors and pleaded guilty to cyberstalking and related charges. The law also requires platforms to remove reported material within 48 hours.
Apr 8, 2026, Source: NBC News
➡️ US Appeals Court Won’t Pause Pentagon’s Anthropic Blacklist Yet | A D.C. appeals court declined to pause the Pentagon’s supply-chain risk designation of Anthropic, keeping it blocked from Defense contracts as the case proceeds. Anthropic calls it retaliation over AI safety guardrails; the Justice Department says it’s about contract and operational reliability. The decision isn’t final and contrasts with a separate California ruling that blocked another Pentagon order.
Apr 8, 2026, Source: Thomson Reuters

Legal Technology
When a Law Firm Is Built Around Legal Tech
Most conversations about legal technology start from the same premise: law firms exist, and technology is introduced into them.
The question is usually how well those tools are adopted, how much efficiency they create, and whether they meaningfully change how lawyers work. However, a smaller, but increasingly relevant, question sits just behind it: what happens when a firm is built around the technology from the outset?
That is the perspective Conrad Everhard brings. As a founding partner of Flatiron Law Group LLP, he is not approaching legal tech as a buyer or evaluator of tools, but as someone designing a law firm where technology is part of the operating model itself.
We spoke with Conrad to understand how that thinking translates into practice, and what it suggests about the direction of the legal services market.
Looking at legal tech from the inside out
Flatiron is, on paper, a law firm. It focuses on high-stakes M&A and complex transactional work, led by partners with backgrounds in large firms. But the way Flatiron describes itself draws attention less to what it does and more to how it is structured to do it.
Rather than selecting tools to support existing workflows, Flatiron places its systems at the center of how work is executed. The firm built an AI-infused deal operating system called Deal Driver, which manages all of the firm’s transactions end to end (Deal Driver is used internally at Flatiron but may be commercialized separately in the future). Flatiron’s founders also founded a separate company, Deal Mentor, which has built an AI training platform that simulates live negotiations in complex transactions and can be deployed for any learning purpose. Together, Deal Driver and Deal Mentor provide the foundational architecture for Flatiron’s practice model: they are not layered onto practice but embedded within it.



AI Tools that will supercharge your productivity
🆕 Ruli AI - Continuous legal intelligence. Your legal team has a memory; Ruli helps it speak.
🆕 Altumatim - The first eDiscovery review that thinks for itself. Autonomous Review™ deploys multi-agent AI teams to streamline responsiveness, privilege, confidentiality redactions, and production, all in one workflow.
🆕 Solve Intelligence - AI patent drafting workflows purpose-built for law firms & corporate IP teams. Draft patents, respond to actions and search prior art in one place.
Want more Legal AI Tools? Check out our
Top AI Tools for Legal Professionals


The weekly ChatGPT prompt that will boost your productivity
Why it helps: It catches errors and hidden risks before anything is sent or filed, saving rework time and protecting accuracy, confidentiality, and credibility.
Prompt: I’m about to rely on this AI-generated legal work product: [paste text] for [jurisdiction] and [use: client email / memo / filing / contract]. Identify what must be verified before use. Return a short checklist covering: citations/authorities, factual assumptions, missing issues, jurisdiction-specific rules, confidentiality/privilege risks, and any red flags that require lawyer review.

Collecting Data to make Artificial Intelligence Safer
The Responsible AI Collaborative is a not‑for‑profit organization working to document real‑world AI harms through its Artificial Intelligence Incident Database.
View the latest reported incidents below:
⚠️ 2026-03-13 | Sixth Circuit Sanctioned Lawyers in Whiting v. City of Athens over Alleged Fake Appellate Citations in Briefs Reportedly Bearing Hallmarks of Hallucinations | View Incident
⚠️ 2026-02-07 | Claude Cowork Allegedly Deleted Folder Containing 15 Years of Family Photos While Organizing User's Wife's Desktop | View Incident
⚠️ 2025-12-15 | Kiro AI Coding Tool Was Reportedly Implicated in 13-Hour AWS Cost Explorer Outage in Mainland China | View Incident


The Legal Wire is an official media partner of:



Thank you so much for reading The Legal Wire newsletter!
If this email lands in your “Promotions” or “Spam” folder, move it to your primary folder so you do not miss out on the next Legal Wire :)
Did we miss something or do you have tips?
If you have any tips for us, just reply to this e-mail! We’d love any feedback or responses from our readers 😄
Disclaimer
The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs ("Materials"), are accurate and complete.
Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not necessarily reflect the current laws or regulations.
The Materials should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.





