
Read time: under 10 minutes
Welcome to this week's edition of The Legal Wire!
This week, legal AI’s biggest question was control. Intapp doubled down on “AI with guardrails,” partnering with Anthropic to build Claude-powered agents while wiring its compliance walls directly into Harvey to close the governance gap as firms scale drafting and research. In parallel, Washington moved in the opposite direction: Trump ordered federal agencies to phase out Anthropic after the Pentagon branded it a “supply-chain risk,” turning AI ethics into procurement leverage.
Meanwhile, the infrastructure bill came due. Oracle is facing a securities suit over how it described the costs and risks of its AI buildout, spotlighting what investors now demand: clearer disclosure on capex, timelines, and exposure tied to hyperscale partnerships. And in Vietnam, the central bank proposed strict transparency rules for AI in banking: tell customers when they’re talking to AI, disclose generated content, and guarantee human appeals.
This week’s feature poses the question that’s asked less often behind all of it: when legal tech stops adding features, what’s it really optimizing for?
This week’s Highlights:
Industry News and Updates
When legal tech stops adding features, what’s it really optimizing for?
AI Regulation Updates
AI Tools to Supercharge your productivity
Legal prompt of the week
Latest AI Incidents & Legal Tech Map


Headlines from The Legal Industry You Shouldn't Miss
➡️ Intapp Partners with Anthropic and Harvey to Add Agentic AI and Guardrails | Intapp struck two deals aimed at accelerating AI adoption in law firms without losing governance. First, it will build its own AI agents on Anthropic’s Claude models, positioning Claude as the underlying engine for workflow automation across compliance and client lifecycle tasks. Second, Intapp will integrate its “Walls for AI” compliance and ethical-safeguards tooling into Harvey’s platform, targeting what Intapp calls the emerging governance gap as firms scale generative AI for drafting and research. The tie-ups reflect a broader trend: established legaltech vendors pairing AI capability with embedded controls so firms can move faster without reputational risk.
Mar 2, 2026, Source: Global Legal Post
➡️ Trump Orders U.S. Agencies to Stop Using Anthropic, Pentagon Labels It a “Supply-Chain Risk” | President Trump has directed federal agencies to phase out Anthropic’s AI, after the Pentagon moved to designate the company a national-security “supply-chain risk” following a dispute over military access and safeguards. Anthropic says it refused to allow unrestricted use of its models for mass domestic surveillance or fully autonomous weapons, and plans to challenge the designation in court. The move also appears to accelerate a competitive reshuffle, with rivals (notably OpenAI) moving quickly to deepen defense ties under their own stated “red lines.”
Feb 27, 2026, Source: BBC
➡️ Oracle Hit With Securities Suit Over AI Buildout Disclosures | A securities class action has been filed against Oracle alleging it misled investors about the scale and risks of its AI infrastructure push, particularly capital spending, data-center timelines, and exposure tied to high-stakes partnerships such as OpenAI. The complaint points to delayed projects and a key backer stepping away, raising questions about whether Oracle adequately disclosed cash-flow strain, debt needs, and large lease obligations supporting its cloud expansion. The case spotlights how hyperscalers communicate the financial trade-offs of AI growth, and could influence investor confidence, Oracle’s cost of capital, and how it structures future AI infrastructure deals.
Feb 25, 2026, Source: Yahoo Finance
➡️ Vietnam’s Central Bank Tightens AI Rules for Banks and E-Wallets | Vietnam’s State Bank is proposing new AI governance rules that would require banks and payment providers to tell customers when they’re interacting with AI tools like chatbots, virtual assistants, or automated hotlines, and to disclose any AI-generated content. The draft also adds stricter transparency around emotion recognition and biometric classification, bans AI-driven marketing that exploits vulnerable customers, and gives users a right to appeal AI decisions to a human reviewer. The rules are expected to take effect in March, with existing systems given until September 2027 to comply.
Feb 24, 2026, Source: MarketTech APAC


Will this be the Next Big Thing in AI?
Legal Technology
When legal tech stops adding features, what’s it really optimizing for?
If you’ve been paying attention to legal technology over the past few years, one thing becomes clear fairly quickly: progress rarely moves in a straight line. It tends to arrive in phases. Tools emerge to solve specific problems, others follow to handle adjacent needs, and over time they begin to stack on top of each other, overlapping and competing for the same moments in a lawyer’s workflow, until someone has to step back and figure out what actually belongs together.
Legal tech is now firmly in that phase.
For much of the last decade, success in this market has been framed around expansion. New features were taken as a sign of momentum, and new modules as proof of ambition. That framing made sense while firms were still testing what technology could realistically take on. It becomes harder to defend once software starts shaping how decisions are made, how money moves, and where responsibility sits.
While much of the conversation, my own commentary included, has focused on what platforms can do, a more interesting conversation is emerging about what platforms choose to take responsibility for, and where they draw the line.
I spoke with Leslie Witt, Chief Product Officer at 8am, to explore how those boundaries are being defined from a product perspective. The discussion was about how product leaders think through integration, financial controls, artificial intelligence, and platform design once software moves beyond isolated tasks and into daily operations. A full Q&A with Leslie is available at the end of this article.


The AI Regulation Tracker offers a clickable global map that gives you instant snapshots of how each country is handling AI laws, along with the most recent policy developments.
The most recent developments from the past week:
📋 2 March 2026 | South Korea and Singapore hold summit on AI and tech cooperation: During his state visit to Singapore, South Korean President Lee Jae-myung held a summit with Prime Minister Lawrence Wong to discuss deepening the strategic partnership between the two nations. They agreed to enhance cooperation in defense, trade, AI, and energy, and notably decided to open negotiations to upgrade the Korea-Singapore Free Trade Agreement and to establish an AI cooperation framework. They also committed to collaborating on small modular reactors and to supporting each other's roles in regional and international affairs, including ASEAN initiatives and Korean Peninsula peace efforts.
📋 27 February 2026 | Canada’s AI minister and OpenAI to discuss AI safety in response to school shooting: Canada’s AI and Digital Innovation Minister Evan Solomon has criticised OpenAI’s response to the Tumbler Ridge shooting and plans to meet Sam Altman, saying the company hasn’t provided enough detail on how it will implement stronger threat detection and law-enforcement referrals. OpenAI says it’s improving protocols, but Solomon warns regulation may follow, including potential reporting duties for credible threats.
📋 27 February 2026 | President Trump orders US federal agencies to stop use of Anthropic technology amid dispute over ethics of AI: In a post on Truth Social, President Donald Trump said he will be directing all federal agencies to immediately cease using technology from Anthropic following a deadlock between the Department of Defense and Anthropic over ethical guidelines for AI systems. The Pentagon had demanded that Anthropic relax its ethical constraints, but the company refused, leading to the termination of their collaboration. Defense Secretary Pete Hegseth subsequently designated Anthropic as a supply-chain risk to national security, a classification typically reserved for foreign adversaries, potentially jeopardising the company's partnerships with other businesses.


AI Tools that will supercharge your productivity
🆕 DealCloser - Fewer tasks, faster closings. Experience the future of transaction management with DealCloser—the all-in-one platform engineered to accelerate your deal velocity.
🆕 Andri AI - Agentic Legal AI for UK and European law firms. AI that reasons through legal complexities, seeks information autonomously, and makes strategic decisions with the judgment of an experienced legal professional.
🆕 Eltemate - A Hogan Lovells technology company. Combining innovative legal AI technology with profound expertise in law.
Want more Legal AI Tools? Check out our
Top AI Tools for Legal Professionals


The weekly ChatGPT prompt that will boost your productivity
Why it helps: Converts a long contract into a client-ready brief with the key terms, risks, and dates, saving you time and reducing follow-up questions.
Prompt:
Paste the contract and state who you represent and the jurisdiction. Return:
1. A 150–200 word plain-English summary in full sentences.
2. The 5 terms that matter most (payment, term/renewal, termination, liability, IP/confidentiality).
3. The top 3 risks, each with a recommended next step (accept / clarify / renegotiate).
4. Any dates to calendar (renewal notice, payment dates, deliverables).

Collecting Data to make Artificial Intelligence Safer
The Responsible AI Collaborative is a not-for-profit organization working to present real-world AI harms through its Artificial Intelligence Incident Database.
View the latest reported incidents below:
⚠️ 2026-02-17 | Purportedly AI-Generated Sepsis Alert Reportedly Prompted Potentially Inappropriate IV Fluid Administration for a Dialysis Patient, Averted by Clinician Intervention | View Incident
⚠️ 2026-02-10 | OpenAI Allegedly Did Not Alert RCMP After ChatGPT Flagged Violent Chats Before British Columbia School Shooting | View Incident
⚠️ 2025-12-09 | Purportedly AI-Generated Video Allegedly Depicted Radnor High School Students Inappropriately, Prompting Police Investigation | View Incident


The Legal Wire is an official media partner of:



Thank you so much for reading The Legal Wire newsletter!
If this email lands in your “Promotions” or “Spam” folder, move it to your primary folder so you don’t miss the next Legal Wire :)
Did we miss something or do you have tips?
If you have any tips for us, just reply to this e-mail! We’d love any feedback or responses from our readers 😄
Disclaimer
The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs ("Materials"), are accurate and complete.
Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not necessarily reflect the current laws or regulations.
The Materials should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.




