Guardrails Off: The White House Doubles Down on AI

From Dropped Safeguards to Banned AI Music

Read time: under 5 minutes

Welcome to this week's edition of The Legal Wire!

The U.S. government just hit the accelerator on AI adoption, tossing out Biden-era safeguards in favor of faster rollouts and naming chief AI officers across federal agencies. Meanwhile, Meta unveiled its new Llama 4 models (Scout, Maverick, and the in-progress Behemoth), promising efficiency gains thanks to a mixture-of-experts architecture. Yet questions remain about guardrails, since these models are designed with fewer restrictions than before.

On the regulatory front, New Jersey’s new law criminalizes deceptive AI-generated media, setting a precedent for how states may handle deepfake content. Across the globe, South Korea is taking an equally hard line, denying copyright protection to fully AI-composed music. Meanwhile, The Independent reports that AI-powered summaries can outdo the original articles in reader engagement, offering a glimpse into how news consumption is evolving.

This week’s Highlights:

  • Industry News and Updates

  • Collateral Damage of AI “Hallucinations”: Why Google’s Slip-Up with an April Fools’ Joke Should Prompt Caution

  • Dioptra: Contract Analysis with a Laser Focus on Accuracy

  • AI Tools to Supercharge Your Productivity

  • Legal Prompt of the Week

Headlines from The Legal Industry You Shouldn't Miss

➡️ White House Orders AI Expansion, Drops Biden-Era Safeguards | The White House has ordered federal agencies to name chief AI officers and develop new AI strategies, replacing Biden-era rules focused on AI safeguards. The updated memo encourages faster AI adoption by removing reporting requirements and limiting restrictions, with a focus on using U.S.-made AI and improving efficiency across government operations.
April 8, 2025, Source: Reuters

➡️ Meta Launches Llama 4 AI Models, Emphasizing Scale, Efficiency, and Fewer Guardrails | Meta has released three new Llama 4 models — Scout, Maverick, and the in-progress Behemoth — aiming to boost performance across vision, reasoning, and long-context tasks. Trained on large text, image, and video datasets, the models mark Meta’s first use of a mixture of experts (MoE) architecture, enabling higher efficiency by activating only a subset of parameters for each task.
April 5, 2025, Source: META.com
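The efficiency claim comes from how MoE routing works: a small gating function scores every expert for each token, and only the top-scoring expert(s) actually run, so compute per token is a fraction of the model's total parameters. Here is a minimal toy sketch of that idea in Python (illustrative only; the gate, experts, and weights below are made up and bear no relation to Meta's actual implementation):

```python
import math

def softmax(xs):
    """Convert raw gate scores into routing probabilities."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token, experts, gate_weights, top_k=1):
    """Route a token to the top-k experts chosen by the gate;
    the remaining experts are skipped entirely, which is where
    the efficiency of a mixture-of-experts layer comes from."""
    # Toy gate: one score per expert (a learned projection in real models).
    scores = [w * token for w in gate_weights]
    probs = softmax(scores)
    # Pick the top-k experts; only these run for this token.
    ranked = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)
    active = ranked[:top_k]
    # Combine only the active experts' outputs, renormalizing their weights.
    total = sum(probs[i] for i in active)
    output = sum(probs[i] / total * experts[i](token) for i in active)
    return output, active

# Four toy "experts" (each just a simple function of the input).
experts = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x / 2]
gate_weights = [0.1, 0.9, -0.5, 0.2]

output, active = moe_forward(5.0, experts, gate_weights, top_k=1)
print(active, output)  # only one of the four experts ran for this token
```

With `top_k=1`, each token touches a single expert, so a model can hold many experts' worth of parameters while paying roughly one expert's worth of compute per token.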

➡️ New Jersey Bans Deceptive AI Media | New Jersey has made it a crime to create or share deceptive AI-generated media, including deepfakes. The new law allows for up to five years in prison and gives victims the right to sue. The bill was inspired by student Francesca Mani, who was targeted by a deepfake and found there were no legal consequences for the perpetrator.
April 3, 2025, Source: AP News

➡️ The Independent Says AI Summaries Often Outperform Original Stories | The Independent’s new AI-powered Bulletin service, which condenses articles into short summaries using Google’s Gemini, is drawing strong reader engagement — sometimes even outperforming the original stories in traffic. Editor Chloe Hubbard said Bulletin meets demand for quick, trusted news and has generated up to a million views in a day. The AI summaries are fact-checked by journalists and bylined to the original writers. CEO Christian Broughton emphasized the tool won’t replace jobs, noting the company has hired seven staff to support it.
April 3, 2025, Source: Press Gazette

➡️ Korea Bans Copyright for AI-Generated Music | South Korea’s top copyright group, KMCA, now requires songwriters to confirm their music is 100% human-made to be eligible for registration. The policy, effective March 24, bans AI-generated content from copyright protection. False declarations may lead to withheld royalties or canceled registrations. While full AI use is prohibited, KMCA is still deciding how to handle AI-assisted works.
April 2, 2025, Source: Digital Music News

Written by: Nicola Taljaard

Legal Technology

Collateral Damage of AI “Hallucinations”: Why Google’s Slip-Up with an April Fools’ Joke Should Prompt Caution

I was reading a BBC piece about journalist Ben Black, who discovered, much to his surprise, that an April Fools’ Day prank he had published years ago was being treated by Google’s AI as real news. It’s amusing at first glance, and the story itself is hardly serious… a made-up piece about a town supposedly boasting the world’s highest concentration of roundabouts, surfaced as genuine information. But the more I thought about it, the more it sank in that AI can unintentionally circulate misinformation. We’ve seen this happen, it’s certainly occurring unseen too, and it can cause real headaches for the people on the other end of the joke.

AI “hallucinations” happen when an algorithm confidently presents something that isn’t true. Sometimes these are small errors. Sometimes they’re entire paragraphs of fabricated content pulled from the recesses of the internet. In Ben Black’s case, the system dredged up an old April Fools’ piece he’d clearly labeled as a prank and presented it as a hard fact… no disclaimers, no second-guessing. On the surface, this might not sound like a big deal. After all, a fictional story about roundabouts in a small Welsh town doesn’t exactly threaten national security. But it does raise a bigger question: if AI can get this tangled up in an obviously satirical piece, what happens when it misidentifies more serious stories? And do the potential consequences impact those consuming the information, or also those creating the content that’s been misconstrued?

Will this be the Next Big Thing in AI?

Legal Technology

Dioptra: Contract Analysis with a Laser Focus on Accuracy

Legal professionals have no shortage of tools claiming to change the way they work, so what does it take to distinguish a good tool from a great one? For a select group of in-house and law firm insiders, Dioptra has quietly emerged as the industry’s best-kept secret. Its approach to contract review puts a laser focus on reliability, and its reputation has spread through word-of-mouth, fueled by the genuine enthusiasm of its users.

Developed by AI veterans, Dioptra is an AI-enabled contract review and insights platform that enables teams to be more strategic, faster, and more compliant from the initial contract turn to post-signature management. Dioptra is structured around one central premise: accuracy matters… a lot.

AI Tools that will supercharge your productivity

🆕 Juristic - Automates legal deliverables, tasks and processes with visualisation - and combines it with your knowledge and templates, sprinkled with legal project management and AI.

🆕 eEvidence - Your digital trust services provider. Whether you communicate, notify, sign, create or share files, trust and legal certainty come by default - now and forever.

🆕 Pandektes - AI and contextual search to uncover insights across legislation, case law, and proprietary data.

Want more Legal AI Tools? Check out our
Top AI Tools for Legal Professionals

The weekly ChatGPT prompt that will boost your productivity

This prompt helps lawyers quickly compile and customize a go-to library of standard clauses, saving time on drafting while ensuring consistency and compliance. Perfect for busy practitioners looking to streamline contract creation and maintain high-quality legal documents.

Prompt: Assemble a collection of commonly used contract clauses for [specific practice area—e.g., employment agreements, NDAs, or partnership contracts]. For each clause, provide a short description of its purpose and essential legal considerations. Include guidance on how to tailor the clause to specific circumstances or jurisdictions.

Thank you so much for reading The Legal Wire newsletter!

If this email lands in your “Promotions” or “Spam” folder, move it to your primary folder so you do not miss out on the next Legal Wire :)

Did we miss something or do you have tips?

If you have any tips for us, just reply to this email! We’d love any feedback or responses from our readers 😄

Disclaimer

The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs ("Materials"), are accurate and complete.

Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not necessarily reflect the current laws or regulations.

The Materials should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.
