OpenAI v. Musk Heads to a Jury: Mission, Money, and the Model
A trial tests charitable promises in a commercial AI era.

Read time: under 6 minutes
Welcome to this week's edition of The Legal Wire!
OpenAI v. Musk just got real: a California judge cleared core claims for a late-April jury trial, putting charitable-mission promises and Microsoft’s role in the spotlight. Meanwhile, the White House and Northeastern governors floated an emergency power plan that would make tech bankroll new generation as AI’s energy appetite spikes: proof that “AI infrastructure” now refers to turbines, not tweets. States aren’t waiting on DC either: Illinois, Texas, and Colorado are advancing divergent workplace AI rules. Might this signal a patchwork compliance year? And on the cultural front, Matthew McConaughey’s trademark play to fence off his voice and signature catchphrase previews how celebrities may police deepfakes in 2026.
The takeaway for firms and in-house teams: document governance you can defend to a jury, price rising energy and data-center exposure into deals, assume state-by-state HR AI obligations will drive policy, and treat voice/likeness protection as an IP-and-contracts problem, not a PR one.
This week’s guest essay from Yuri Kozlov (JudgeAi) argues for a unified legal ontology, moving from “safe-to-say” heuristics to verifiable, norm-based AI decisions.
This week’s Highlights:
Industry news and updates
Why Artificial Intelligence Needs a New Ontology of Law
AI regulation updates
AI tools to supercharge your productivity
Legal prompt of the week
Latest AI incidents & legal tech map


Headlines from The Legal Industry You Shouldn't Miss
➡️ Musk Seeks Up to $134B From OpenAI and Microsoft Ahead of April Trial | Elon Musk is seeking as much as $134 billion in damages from OpenAI and Microsoft, arguing OpenAI abandoned its nonprofit mission and benefited financially after he helped fund its early development. According to Reuters, Musk’s latest court filing puts the claimed damages between roughly $79 billion and $134 billion, tied to OpenAI’s reported $500 billion valuation and Microsoft’s gains from the partnership. The request comes as the case is set to proceed to a jury trial in late April, after a federal judge declined to dismiss Musk’s claims. OpenAI and Microsoft dispute the figures and are seeking to block Musk’s expert witness testimony.
Jan 19, 2026, Source: PYMNTS
➡️ OpenAI Responds to Musk Lawsuit With “The Truth Elon Left Out” | OpenAI published a detailed rebuttal to Elon Musk’s latest court filings, arguing he selectively quoted internal notes and private journal entries to support claims that OpenAI abandoned a promised nonprofit-only model. The company says the full context shows Musk agreed as early as 2017 that a for-profit structure would likely be necessary to raise the billions required to pursue the mission, with a nonprofit continuing “in some form.” OpenAI also claims negotiations collapsed because it refused to give Musk full control, and frames the litigation as part of a broader effort to slow OpenAI down while advancing xAI.
Jan 16, 2026, Source: OpenAI
➡️ Trump and Northeast Governors Push Plan to Shift Data Center Power Costs to Tech Firms | President Donald Trump and several Northeastern US governors are backing a proposal that would require major data center operators to shoulder more of the cost of new electricity generation, as surging AI-driven demand strains the regional grid. Under the plan, PJM Interconnection would be urged to run a one-time emergency auction in which technology companies bid on long-term contracts to fund new power plants, potentially supporting around $15 billion in new capacity and aimed at keeping household electricity bills from rising. The development reflects growing political pressure over energy affordability and the rapid expansion of power-hungry AI infrastructure, with the proposal framed as an emergency intervention rather than a permanent market redesign.
Jan 16, 2026, Source: Bloomberg Law
➡️ State AI Workplace Laws Advance Into 2026 Despite Federal Pushback | Illinois, Texas, and Colorado are moving ahead with state-level AI rules affecting employment and discrimination in 2026, even as the federal government signals it may try to limit fragmented state regulation. Illinois has expanded its human rights law to cover AI-driven workplace decisions, including worker notice requirements and limits on certain data inputs, with a private right of action. Texas has adopted a business-friendly AI governance framework, including a sandbox program and state oversight, without private enforcement. Colorado’s high-risk AI law, effective June 2026, requires impact assessments, transparency disclosures, and appeals rights, enforced by the state attorney general.
Jan 15, 2026, Source: National Law Review
➡️ McConaughey Uses Trademark Law to Guard Voice and Likeness From AI Copies | Matthew McConaughey has registered trademarks covering his image, voice, and clips featuring his signature phrase “alright, alright, alright,” aiming to deter unauthorized AI-generated impersonations. His legal team says the filings are designed to create clearer boundaries around consent, attribution, and commercial use as deepfakes proliferate across entertainment. Experts told the Wall Street Journal this appears to be a first-of-its-kind attempt by an actor to rely on trademark law, rather than copyright or publicity rights, to protect a personal likeness from AI misuse, with the added goal of preserving licensing value in an AI-driven market.
Jan 15, 2026, Source: BBC


Will this be the Next Big Thing in A.I.?
Legal Technology
Why Artificial Intelligence Needs a New Ontology of Law
By Yuri Kozlov
Large language models know an unprecedented amount about reality. They understand physical processes, economic mechanisms, social dynamics, and causal relationships in complex systems. They've been trained on scientific texts, technical documentation, and empirical data that capture how the world works and what happens when certain actions are taken. Yet this isn't enough to call them normative agents.
Knowing how reality works is one thing. Being able to determine which interventions in that reality are permissible is something entirely different. You can understand all the physical laws and still not know which uses of technology are acceptable. You can see all the economic consequences of an action and still be unable to determine what regulation would be fair. The gap between "knowing how the world works" and "determining what can be done in that world" doesn't disappear with more data.
Law is a method of formal intervention in reality through the regulation of interactions. It doesn't describe the world—it sets boundaries for what's permissible within it. Working with such a task requires an ontology: a formal structure that transforms knowledge about reality into normative decisions.
What Is Normative Reasoning?
Consider three scenarios. First: a judge examines a dispute over the consequences of a technological failure in an automated system. Second: a legislator creates regulation for a new biotechnology whose effects will manifest over decades. Third: an AI assistant decides how to respond to a user request that could alter physical reality through device control.

The AI Regulation Tracker offers a clickable global map that gives you instant snapshots of how each country is handling AI laws, along with the most recent policy developments.
Highlights from the past week:
📋 19 January 2026 | South Korea and Italy agree to strengthen cooperation in AI, chips: At the Cheong Wa Dae summit in Seoul, South Korea and Italy have issued a joint statement agreeing to deepen cooperation in high-tech sectors, including AI, semiconductors and aerospace, while expanding ties in the defense industry and critical-mineral supply chains. On the sidelines of the summit, the two countries signed a memorandum of understanding on semiconductor cooperation between the Korea Semiconductor Industry Association and Italy's Association of Electrical and Electronic Industries. The memorandum aims to promote business cooperation and information sharing in the semiconductor sector — including advanced areas such as AI — and to strengthen semiconductor supply chains.
📋 16 January 2026 | National AI Strategy Committee holds AI copyright debate: South Korea’s National AI Strategy Committee has reportedly convened a meeting to address escalating copyright tensions between domestic creators and the AI industry, notably the “use first, compensate later” policy and the liability exemptions for text and data mining (TDM) promoted by the government and the National Assembly. The committee proposed supporting reasonable transactions in sectors with established markets, such as publishing and broadcasting, while allowing third-party use of works without clear trading markets, such as online public posts, under lawful access with future revenue-sharing mechanisms. Creators, however, expressed skepticism about receiving fair compensation for data already used and stressed the need for legislative transparency to track data usage. The committee plans to supplement the corresponding copyright tasks within the South Korea AI Action Plan by integrating the opinions presented at the meeting.
📋 16 January 2026 | Japan and ASEAN agree to cooperate on AI development: Japan and the Association of Southeast Asian Nations (ASEAN) have agreed to collaborate on developing new AI models and establishing related legal frameworks. This cooperation was formalized in a joint statement during a meeting of digital ministers from Japan and ASEAN member states in Hanoi, following a proposal by Japanese communications minister Yoshimasa Hayashi. The partnership aims to strengthen their position in the AI sector amid increasing influence from the US and China.

Future Contracts Miami is set to take place on February 25, 2026, at the Newman Alumni Centre, University of Miami.
See the full agenda and speaker lineup at https://www.futurecontractsmiami.com/
Vendors can claim a 10% discount using the code LWFC-M1.
The one-day conference will bring together legal professionals from in-house teams, private practice, legal operations, and legal technology to discuss how contracting is being reshaped by automation, artificial intelligence, and evolving commercial expectations.
Sessions will examine practical approaches to drafting, negotiating, and managing contracts, as well as the growing role of contract data and technology in supporting business outcomes.
Future Contracts Miami is organised by Cosmonauts as part of its international legal innovation event portfolio.


AI Tools that will supercharge your productivity
🆕 Avokaado - Contract lifecycle management. Contracts that think for your business. Replace manual work and disconnected tools with one system that drafts, manages, and tracks contracts automatically.
🆕 Josef - Legal AI that drives the business. Create automated Q&A, contract generation and workflow tools on Josef. They help answer questions from the business, draft high-volume documents, and move teams seamlessly from A to B.
🆕 Afriwise - Redefining legal-information sourcing and compliance in Africa. Break free of the chaos of finding and complying with African laws and regulations.
Want more Legal AI Tools? Check out our Top AI Tools for Legal Professionals.


The weekly ChatGPT prompt that will boost your productivity
Why it helps: Turns a few shorthand notes into a polished, jurisdiction-aware client update.
Instructions:
Inputs: matter name/posture; court/jurisdiction (rules if relevant); 3–5 shorthand notes (events, dates, upcoming deadlines, decisions needed); recipient type (GC/business owner); tone (formal/neutral); firm style cues.
Task: Draft a professional, full-sentence email that uses correct legal terminology for the matter and jurisdiction.
Output: subject line; 2–3 tight paragraphs: (1) what occurred since last update (with exact dates), (2) what’s next and timing under applicable rules, (3) client actions/documents needed by specific dates; placeholders if info missing; close with brief availability + signature.
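For reference, here is one way the filled-in prompt might read. The matter names, dates, and parties in brackets are illustrative placeholders only, not part of the template:
“You are drafting a client update for [the GC of Acme Corp] on [Smith v. Acme, pending in the N.D. Cal.]. Using the shorthand notes below, write a professional, full-sentence email in a [neutral, formal] tone with correct legal terminology for this court and matter. Notes: [Jan 12 – motion to dismiss granted in part; Jan 20 – answer due; Feb 3 – initial case management conference; client to confirm document custodians by Jan 28]. Output a subject line and 2–3 tight paragraphs covering (1) what has occurred since the last update, with exact dates, (2) what happens next and when under the applicable rules, and (3) what we need from the client and by which dates. Use placeholders for anything missing and close with brief availability and a signature block.”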

Collecting Data to make Artificial Intelligence Safer
The Responsible AI Collaborative is a not‑for‑profit organization working to present real‑world AI harms through its Artificial Intelligence Incident Database.
View the latest reported incidents below:
⚠️ 2026-01-03 | Purportedly AI-Generated Images and Videos Reportedly Spread Misinformation About Nicolás Maduro's Capture on X | View Incident
⚠️ 2026-01-03 | National Weather Service Reportedly Published AI-Generated Forecast Map With Fabricated Idaho Town Names | View Incident
⚠️ 2023-12-01 | OpenDream AI Platform Reportedly Commercialized AI-Generated CSAM and Non-consensual Deepfake Sexual Images | View Incident


The Legal Wire is an official media partner of:



Thank you so much for reading The Legal Wire newsletter!
If this email lands in your “Promotions” or “Spam” folder, move it to your primary inbox so you do not miss the next edition of The Legal Wire :)
Did we miss something or do you have tips?
If you have any tips for us, just reply to this e-mail! We’d love any feedback or responses from our readers 😄
Disclaimer
The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs ("Materials"), are accurate and complete.
Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not necessarily reflect the current laws or regulations.
The Materials should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.


