Regulators Tighten the Screws, But AI Still Finds a Way to Win in Court
State Crackdowns, Global Crypto Greenlights, and the AI Play That Won $20 Million

Read time: under 4 minutes
Welcome to this week's edition of The Legal Wire!
New York just vaulted to the front of America’s AI-policy race: the RAISE Act would force any company spending $100 million or more on a “frontier model” to file safety plans, report incidents, and open its black box to state auditors. Meta and IBM warn the rules could chill research, yet lawmakers say they merely codify best practice while Washington dithers.
The policy wave is spreading. Vietnam has legalized crypto and rolled out tax breaks for chips and AI startups, hoping to lure back talent that once fled offshore. In Europe, Deutsche Telekom and Nvidia are wiring up a 10,000-GPU industrial cloud to anchor “digital sovereignty,” even as the UK passes a new data law that leaves artists unprotected from AI scraping, with a promise to fix that later.
On Capitol Hill, Senator Cynthia Lummis floated a bill shielding AI developers from lawsuits if they publish detailed system disclosures. Professionals who rely on those tools would still carry the legal risk.
Amid the rule-making flurry, AI is already rewriting courtroom playbooks. In a recent $20 million medical-malpractice verdict, the plaintiff’s team tapped Expert Institute’s AI to spot the perfect orthopedic expert and dig up prior testimony that dismantled the defense: proof that while regulators debate, savvy litigators are turning algorithms into winning arguments.
This week’s Highlights:
Industry News and Updates
How AI-Powered Expert Intelligence Won a $20M Med-Mal Case
AI Regulation Updates
StructureFlow Turns Diagrams Into Dialogue for Legal Teams
AI Tools to Supercharge Your Productivity
Legal prompt of the week
Latest AI Incidents & Legal Tech Map


Headlines from The Legal Industry You Shouldn't Miss
➡️ New York Moves to Regulate Frontier AI Developers | New York could become the first U.S. state to regulate frontier AI, as lawmakers passed the Responsible AI Safety and Education Act (RAISE) targeting companies spending $100M+ on advanced models. The bill, championed by Assemblyman Alex Bores, would require safety plans, transparency, and disclosures of serious AI incidents. While industry giants like Meta and IBM oppose the measure, citing innovation risks, Bores says it enforces what firms already claim to do. The bill awaits Gov. Kathy Hochul’s decision, amid rising calls for states to lead where federal AI policy lags.
June 18, 2025, Source: Times Union
➡️ Vietnam Passes Landmark Digital Law, Legalizing Crypto and Boosting AI, Chips | Vietnam’s National Assembly has passed a sweeping new law legalizing digital assets and offering major incentives for semiconductors, AI, and digital startups. The legislation, set to take effect in 2026, defines virtual and crypto assets under civil law, ending years of regulatory uncertainty that had driven firms offshore. It also introduces tax breaks and subsidies to attract tech investment, with the aim of growing Vietnam’s digital economy and becoming a key player in the global semiconductor supply chain.
June 16, 2025, Source: Decrypt
➡️ Deutsche Telekom and Nvidia to Build AI Cloud for European Industry | Deutsche Telekom and Nvidia will create the first industrial AI cloud in Germany by 2026, the companies announced. Nvidia will provide 10,000 chips, while Telekom handles infrastructure, operations, and AI services. Chancellor Friedrich Merz welcomed the move as key to Germany’s digital sovereignty.
June 13, 2025, Source: Reuters
➡️ UK Passes Data Bill Without AI Copyright Protections | The UK’s Data (Use and Access) Bill has passed Parliament without amendments to protect creatives from AI training on copyrighted work. A proposed fix from the House of Lords, backed by over 400 artists including Dua Lipa and Kazuo Ishiguro, was rejected. The government says separate AI legislation is coming.
June 13, 2025, Source: Screendaily
➡️ GOP Bill Would Shield AI Firms from Lawsuits, If They're Transparent | Sen. Cynthia Lummis is proposing new legislation that would protect AI developers from lawsuits, but only if they disclose how their systems work. The Responsible Innovation and Safe Expertise Act, first reported by NBC News, aims to clarify that licensed professionals like doctors or lawyers remain liable for decisions made using AI tools, not the AI firms themselves. The bill wouldn’t apply to areas like self-driving cars or shield developers who act recklessly. Lummis says the goal is to balance innovation with transparency and accountability.
June 12, 2025, Source: NBC News


Legal technology
How AI-Powered Expert Intelligence Won a $20M Med-Mal Case
In a high-stakes medical malpractice case involving a former MLS goalkeeper, the plaintiff’s legal team used Expert Institute’s AI-driven expert intelligence to secure a $20M verdict. From identifying a top orthopedic surgeon to uncovering past testimony that undercut the defense, the case shows how litigation teams are leveraging AI and tech-forward strategies to drive results in the courtroom.
Watch how expert intelligence made the difference here.


The AI Regulation Tracker offers a clickable global map that gives you instant snapshots of how each country is handling AI laws, along with the most recent policy developments.
The most recent developments from the past week:
📋 16 June 2025 | European Commission calls for experts for EU AI Act scientific panel: The European Commission has opened a call for expression of interest to select sixty independent experts for a scientific panel supporting the implementation and enforcement of the EU AI Act. The panel will advise the European AI Office and national authorities on general-purpose AI models and systems, specifically regarding systemic risks, model classification, evaluation methodologies, and cross-border market surveillance. The panel will also alert the AI Office to emerging risks. The call ends on 14 September 2025…
📋 15 June 2025 | South Korea appoints its first-ever AI secretary: President Lee Jae-myung has reportedly appointed Ha Jung-woo, head of the Naver AI Innovation Centre, as the South Korean government’s first senior presidential secretary for AI. It is the first time the country’s presidential office has had a senior secretary with a mandate to handle the country’s AI infrastructure investments and policies. The senior secretary will report to the chief of staff for policy...
📋 11 June 2025 | Unions will push AI regulation and pay at productivity summit: Unions are reportedly advocating for the regulation of AI in the workplace, and for workers to receive a larger share of productivity gains through increased wages, at Prime Minister Anthony Albanese's newly announced summit. White-collar unions are calling for the government to implement a "digital just transition" for employees impacted by AI, drawing parallels to protections for workers in coal and gas industries transitioning to renewable energy, and are also seeking compensation for workers whose data is utilized in AI training.


Will This Be the Next Big Thing in AI?
Legal Technology
StructureFlow Turns Diagrams Into Dialogue for Legal Teams
From Whiteboard to Walkthrough
There’s a certain irony to modern legal work: while documents have gone digital, structure charts, timelines, and step plans still get scribbled on whiteboards or patched together in PowerPoint. StructureFlow, a legal tech company based in the UK, wants to change that, not by reinventing legal thinking, but by giving professionals a better way to show what they already know.
Founded in 2018 by former corporate lawyer Tim Follett, StructureFlow focuses on visual modeling for complex legal, corporate, and financial scenarios. The idea is simple: if you can picture the structure, you’re halfway to explaining it. And when your team, client, or counterpart actually understands it? Deals move faster, disputes make more sense, and meetings stop spinning in circles.


AI Tools that will supercharge your productivity
🆕 Legora - AI Workspace for Legal Teams | Simplify Complex Legal Work
🆕 Fileread - Query unlimited documents, verify findings, and build stronger fact memos, chronologies, and summaries, powered by AI and the Relativity integration.
🆕 Onit - Manage your enterprise legal workflows, contracts, vendors, and spend on a single, AI-native platform.
Want more Legal AI Tools? Check out our Top AI Tools for Legal Professionals.


The weekly ChatGPT prompt that will boost your productivity
This prompt instantly converts messy notes into clear, shareable minutes, reducing post-meeting admin time, keeping everyone accountable, and ensuring nothing slips through the cracks.
Instructions:
Paste your rough notes or transcript from a client/team meeting. Ask your local secure LLM to deliver:
- A concise summary of decisions made.
- A bullet list of action items with assignees and deadlines.
- Key follow-up questions or documents needed.
- A brief next-meeting agenda (optional).
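As a rough sketch, the instructions above can be wrapped into a reusable prompt template before pasting it into your LLM of choice. The helper below is purely illustrative; the function and variable names are our own, not part of any tool mentioned in this newsletter:

```python
def build_minutes_prompt(raw_notes: str, include_agenda: bool = True) -> str:
    """Assemble a meeting-minutes prompt from rough notes or a transcript."""
    # The four deliverables listed in this week's prompt instructions.
    requests = [
        "A concise summary of decisions made.",
        "A bullet list of action items with assignees and deadlines.",
        "Key follow-up questions or documents needed.",
    ]
    if include_agenda:  # the agenda item is optional
        requests.append("A brief next-meeting agenda.")
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(requests, 1))
    return (
        "You are drafting formal meeting minutes. "
        "From the notes below, deliver:\n"
        f"{numbered}\n\n"
        f"Meeting notes:\n{raw_notes}"
    )

prompt = build_minutes_prompt(
    "Discussed vendor contract; Ana to send redline by Friday."
)
```

Keeping the deliverables in a list makes it easy to adapt the template per matter type (e.g. dropping the agenda for one-off client calls) without retyping the whole prompt.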


Collecting Data to make Artificial Intelligence Safer
The Responsible AI Collaborative is a not‑for‑profit organization working to prevent real‑world AI harms by documenting them in its Artificial Intelligence Incident Database.
View the latest reported incidents below:
⚠️ 2025-04-29 | Meta AI App Reportedly Publishes Personal Chats Without Users Fully Realizing | View Incident
⚠️ 2025-04-25 | Factum in Ko v. Li Allegedly Contains AI-Generated Case Law Citations | View Incident
⚠️ 2025-06-12 | Google AI Overview Reportedly Misstates Aircraft Manufacturer as Airbus Instead of Boeing in Air India Flight 171 Crash | View Incident




Thank you so much for reading The Legal Wire newsletter!
If this email gets into your “Promotions” or “Spam” folder, move it to the primary folder so you do not miss out on the next Legal Wire :)
Did we miss something or do you have tips?
If you have any tips for us, just reply to this e-mail! We’d love any feedback or responses from our readers 😄
Disclaimer
The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs ("Materials"), are accurate and complete.
Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not necessarily reflect the current laws or regulations.
The Materials should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.