Merry Christmas, Now Let’s Talk AI Liability
Deepfakes regulated, citations sanctioned, and research rethought with realLaw AI.

Read time: under 4 minutes
Welcome to this week's edition of The Legal Wire!
From Nicola & Joel at The Legal Wire: Merry Christmas and happy holidays!
Thank you to each of our 15,227 readers for every open, share, tip, and thoughtful reply this year. You’ve turned a niche newsletter into a real community. May your break be restful, your inbox merciful, and your new year filled with AI efficiencies. We can’t wait to keep delivering sharp, trustworthy legal-tech coverage in 2026.
Europe debates a “digital omnibus” that could carve troubling AI data exceptions just as California sets a 2027 watermark standard against deepfakes; the ABA closes its AI Task Force with a blueprint for human-in-the-loop practice; another courtroom fine reminds everyone that hallucinated citations are a professional risk, not a punchline; and our feature drops into the UAE, where realLaw AI stitches federal, emirate, and free-zone law into one usable source, turning research anxiety into operational confidence.
This week’s Highlights:
Industry News and Updates
realLaw AI: Turning UAE Legal Chaos Into Clarity
AI Regulation Updates
AI Tools to Supercharge your productivity
Legal prompt of the week
Latest AI Incidents & Legal Tech Map



Headlines from The Legal Industry You Shouldn't Miss
➡️ Experts Warn EU’s ‘Digital Omnibus’ Could Create an AI Data Protection Free Zone | Legal experts warn that the EU Commission’s proposed “digital omnibus” reforms could undermine the GDPR by carving out broad exemptions for AI-related data processing. Critics argue the plan would allow companies to sidestep core privacy safeguards, weaken protections for sensitive and children’s data, and disproportionately benefit Big Tech, raising concerns about trust, digital sovereignty, and long-term consumer rights.
Dec 21, 2025, Source: Heise Online
➡️ California Passes First-in-the-Nation Law Targeting AI Deepfakes | California has enacted the AI Transparency Act of 2025 (AB 853), a landmark law requiring major online platforms to help users identify whether images, video, or audio are authentic or AI-generated. Drafted with input from UC Berkeley faculty, the law mandates provenance tools like watermarks and metadata and takes effect in January 2027, positioning California to set a de facto national standard against deceptive deepfakes.
Dec 18, 2025, Source: UC Berkeley Haas
➡️ ABA Wraps Up AI Task Force with Report on Law, Ethics, and Governance | The American Bar Association has released the final report of its Task Force on Law and Artificial Intelligence, outlining how AI is reshaping legal practice, governance, and access to justice. The report stresses the need for strong AI oversight, human verification of AI outputs, and expanded legal education, with the ABA Center of Innovation set to implement the task force’s recommendations.
Dec 16, 2025, Source: Canadian Lawyer Mag
➡️ Law Firm Fined $10,000 for AI-Generated ‘Hallucinations’ in OnlyFans Lawsuit | A federal judge in California has fined plaintiffs’ firm Hagens Berman Sobol Shapiro and one of its partners $10,000 after court filings in an OnlyFans-related case were found to contain AI-hallucinated material. A co-counsel was separately sanctioned $3,000 for using ChatGPT without verifying citations, underscoring courts’ growing intolerance for unvetted AI use in legal briefs.
Dec 16, 2025, Source: Reuters


Will this be the Next Big Thing in AI?
Legal Technology
realLaw AI: Turning UAE Legal Chaos Into Clarity
If there’s one thing lawyers in the UAE consistently tell you, it’s this: finding the law should not be harder than applying it. Yet for years legal teams here have struggled with piecemeal PDFs, confusing portals, and cryptic guidance pages just to answer basic questions. These are the struggles that realLaw AI, a legal intelligence platform built by legal practitioners for legal practitioners and born from the lived frustration of its founders, sets out to solve.
Launched in spring 2024 by Vitaly Ryzhakov and his wife, Irina, realLaw AI brings together federal, emirate-level, and free zone legislation, regulatory guidance, and case law in a single, coherent environment. Their primary goal? Usable law.

The AI Regulation Tracker offers a clickable global map that gives you instant snapshots of how each country is handling AI laws, along with the most recent policy developments.
The most recent developments from the past week:
📋 21 December 2025 | Defense Ministry revives North Korea policy division, creates AI deputy role: South Korea's Defense Ministry has reportedly reinstated its North Korea Policy Division and established a new deputy minister position focused on AI. The deputy minister will oversee AI-related organizations and functions, including defense AI planning, force policy, defense informatization, and military supply management, serving as a control tower for national defense AI policies.
📋 19 December 2025 | TRUMP AMERICA AI Act introduced: US Senator Marsha Blackburn (R-Tenn.) has unveiled The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry (TRUMP AMERICA AI Act). This legislative framework would codify President Trump’s executive order to create one rulebook for AI that protects children, creators, conservatives, and communities from potential harms associated with AI, while promoting innovation to ensure the nation's leadership in the global AI race. Key provisions include imposing a duty of care on AI developers to prevent foreseeable harm, reforming Section 230 to enhance parental control over children's online access, safeguarding minors on online platforms, and protecting creators' rights by requiring explicit consent for the use of their data in AI training. Additionally, the act addresses concerns about bias against conservative viewpoints in AI systems and mandates transparency regarding AI's impact on employment.
📋 18 December 2025 | Office of Communications publishes guidance on applicability of Online Safety Act to AI chatbots: The UK Office of Communications (Ofcom) has issued guidance on how the Online Safety Act applies to AI chatbots. The guidance clarifies that providers of user-to-user services, search services, and services publishing pornographic content must assess and reduce the risk of harm to their users, particularly children. Chatbots that fall within these service definitions or are part of such services are covered by the Act. This includes AI-generated content shared on user-to-user services, which is regulated in the same manner as content generated by humans. However, chatbots are not subject to the Act's regulation if they only facilitate interaction with the chatbot itself without involving other users, do not search multiple websites or databases for responses, and are unable to generate pornographic content.
📋 18 December 2025 | Spain AESIA issues guidance under the EU AI Act: The Spanish Supervisory Authority (AESIA) has published a set of detailed guidance documents and templates aimed at helping providers and deployers of high-risk AI systems under the EU AI Act comply with the relevant requirements of the law. The guidance is divided into: (1) introductory guides on an overview of the AI Act and key compliance principles; (2) technical guides on practical recommendations relating to conformity assessments, risk management systems, technical documentation, record-keeping and transparency, and human oversight requirements; and (3) toolkit of checklists and templates. The documents are subject to ongoing evaluation and review, with periodic updates in line with the development of standards and various guidelines published by the European Commission, and will be updated once the Digital Omnibus amending the AI Act is approved. All materials are currently available in Spanish only.


AI Tools that will supercharge your productivity
🆕 Legora - Legora frees you from admin so you can think sharper, move faster, and deliver more for your clients.
🆕 StructureFlow - A data-driven visual workspace that gives dealmakers and advisors the clarity and confidence to master complexity and win the room.
🆕 Harvey - Augment All of Your Work on One Integrated, Secure Platform.
Want more Legal AI Tools? Check out our Top AI Tools for Legal Professionals.


The weekly ChatGPT prompt that will boost your productivity
Why it helps: Catches contradictions that cause disputes and gives ready language to align the SOW to the MSA in minutes.
Instructions:
Paste the Master Services Agreement and the Statement of Work. Return a table of mismatches for: scope, deliverables/acceptance, fees/payment timing, term/renewal, IP ownership/license, confidentiality, indemnity/limitation, governing law/venue, and termination. For each, show: MSA rule, SOW rule, conflict, risk (H/M/L), and a fix with suggested clause snippet.
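If you run this comparison regularly, the same prompt can also be scripted against a model API rather than pasted into the chat window each time. Below is a minimal sketch in Python using the OpenAI SDK; the model name, file paths, and the compare_contracts helper are illustrative assumptions on our part, not part of the prompt above, and as this week’s sanctions story shows, the output is a first-pass issue list that still needs lawyer verification.

```python
# Minimal sketch: run the MSA-vs-SOW mismatch prompt through the OpenAI API.
# Assumptions: the openai package is installed, OPENAI_API_KEY is set in the
# environment, the contracts are plain-text files, and "gpt-4o" stands in as
# an illustrative model choice.
from pathlib import Path

from openai import OpenAI

PROMPT = (
    "Compare the Master Services Agreement and the Statement of Work below. "
    "Return a table of mismatches for: scope, deliverables/acceptance, "
    "fees/payment timing, term/renewal, IP ownership/license, confidentiality, "
    "indemnity/limitation, governing law/venue, and termination. "
    "For each, show: MSA rule, SOW rule, conflict, risk (H/M/L), "
    "and a fix with a suggested clause snippet."
)


def compare_contracts(msa_path: str, sow_path: str) -> str:
    """Send both documents plus the mismatch prompt and return the model's table."""
    msa = Path(msa_path).read_text(encoding="utf-8")
    sow = Path(sow_path).read_text(encoding="utf-8")

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; swap in whatever model your firm has approved
        messages=[
            {"role": "system", "content": "You are a careful contracts analyst."},
            {
                "role": "user",
                "content": f"{PROMPT}\n\n--- MSA ---\n{msa}\n\n--- SOW ---\n{sow}",
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Example usage with hypothetical file names.
    print(compare_contracts("msa.txt", "sow.txt"))
```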

Collecting Data to make Artificial Intelligence Safer
The Responsible AI Collaborative is a not‑for‑profit organization working to present real‑world AI harms through its Artificial Intelligence Incident Database.
View the latest reported incidents below:
⚠️ 2023-09-17 | Peppermill Casino Facial Recognition System Reportedly Misidentified Individual, Leading to Wrongful Arrest in Reno | View Incident
⚠️ 2025-12-12 | Canada Revenue Agency (CRA) AI Chatbot 'Charlie' Reportedly Gave Incorrect Tax Filing Guidance at Scale | View Incident
⚠️ 2025-12-15 | Grok AI Reportedly Generated Fabricated Civilian Hero Identity During Bondi Beach Shooting | View Incident


The Legal Wire is an official media partner of:



Thank you so much for reading The Legal Wire newsletter!
If this email gets into your “Promotions” or “Spam” folder, move it to the primary folder so you do not miss out on the next Legal Wire :)
Did we miss something or do you have tips?
If you have any tips for us, just reply to this e-mail! We’d love any feedback or responses from our readers 😄
Disclaimer
The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs ("Materials"), are accurate and complete.
Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not necessarily reflect the current laws or regulations.
The Materials should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.


