
Earnings Prove It: AI Spend Works, Apple Ramps, Policy Shifts, and Gavel’s Next Move

AI Cash Machine: Microsoft $4T, Apple Ramps, Meta Pauses EU Ads

Read time: under 4 minutes

Welcome to this week's edition of The Legal Wire!

Big Tech’s bet is paying off. Microsoft hit a $4T valuation on $75B Azure sales and 100M+ Copilot users; Meta added ~$200B; Alphabet lifted capex to $85B; Amazon plans up to $118B. AI isn’t a moonshot anymore; it’s the revenue engine.

Regulation tightens just as money flows: from 2 August, the EU’s AI Act imposes transparency and copyright duties on any model trained above 10²³ FLOP, with even tougher safety rules past 10²⁵ FLOP. Providers that sign Brussels’ voluntary Code of Practice can ease the burden, but existing models must still comply by 2027.

Apple is shifting talent and capex into AI, with 20+ features live and seven AI acquisitions this year (Siri’s big upgrade slips to 2026). In policy, Sen. Mike Rounds is reviving a bipartisan AI sandbox for financial services, while Meta will pause political/issue ads across the EU in October under new transparency rules.

For the legal trenches, Gavel’s practical automation keeps lawyers in control—and its new Gavel Exec brings playbook-driven redlining and negotiation directly into Word. We spoke with CEO Dorna Moini about moving from automating documents to automating decisions. Read the full interview below!

This week’s Highlights:

  • Industry News and Updates

  • Gavel and the Rise of Practical Automation in Legal Tech

  • AI Regulation Updates

  • AI Tools to Supercharge Your Productivity

  • Legal Prompt of the Week

  • Latest AI Incidents & Legal Tech Map

Headlines from The Legal Industry You Shouldn't Miss

➡️ EU AI Rules Take Effect for General-Purpose Models | Starting August 2, the EU’s AI Act now applies to general-purpose AI (GPAI) models, introducing new requirements for transparency, copyright compliance, and responsible AI development. AI models trained with over 10²³ FLOP and capable of generating language must disclose training data summaries and follow copyright rules. The EU also published guidance and a compliance template for providers. More powerful models, those exceeding 10²⁵ FLOP, face stricter safety and reporting requirements. Providers that sign the voluntary GPAI Code of Practice may benefit from reduced regulatory burdens. Existing models must comply by August 2027.
Aug 2, 2025, Source: European Commission
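The two compute thresholds above can be read as a simple tiering rule. The sketch below is illustrative only: the function and tier labels are shorthand for this article's summary, not terms from the Act itself.

```python
def gpai_tier(training_flop: float) -> str:
    """Classify a general-purpose AI model by training compute,
    using the two thresholds described in the EU AI Act coverage above.
    Tier labels are illustrative shorthand, not statutory terms."""
    if training_flop > 1e25:
        return "GPAI with systemic risk"  # stricter safety and reporting duties
    if training_flop > 1e23:
        return "GPAI"                     # transparency and copyright duties
    return "below GPAI threshold"         # outside the GPAI-specific rules

# Example: a model trained with roughly 5 x 10^24 FLOP
print(gpai_tier(5e24))  # GPAI
```

Under this reading, only models past the higher 10²⁵ FLOP bar pick up the systemic-risk duties; everything between the two thresholds carries the baseline transparency and copyright obligations.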

➡️ Big Tech’s AI Spending Surge Pays Off, Investors Cheer | Reported by Reuters: Microsoft, Meta, Alphabet, and Amazon are pouring billions into AI, and it's working. Strong earnings from AI-powered services like cloud, ads, and search have reassured investors. Microsoft hit $4 trillion in market value after revealing $75B in Azure sales and 100M+ Copilot users. Meta added $200B to its valuation. Alphabet raised its capex forecast to $85B, and Amazon plans to spend up to $118B this year. Despite earlier concerns, investors now see the spending as justified: AI is quickly becoming Big Tech’s main growth engine.
Aug 1, 2025, Source: Reuters

➡️ Apple Boosts AI Investment, Reallocates Talent Amid Catch-Up Efforts | Apple is ramping up its AI investments and reallocating staff to focus on the technology, CEO Tim Cook said during the company’s Q3 2025 earnings call. Cook called AI “one of the most profound technologies of our lifetime” and emphasized Apple’s strategy to integrate it seamlessly across devices and services. CapEx spending is rising due to AI, though the company still relies on third-party infrastructure. Apple also revealed it's making acquisitions every few weeks to accelerate its AI roadmap: seven so far this year. Despite criticism for lagging in the AI race, Cook defended Apple’s measured approach, noting that more than 20 AI features have already launched, with more to come later this year. A major Siri upgrade, however, has been delayed until 2026.
Jul 31, 2025, Source: TechCrunch

➡️ Sen. Rounds Reintroduces AI Bill to Spur Financial Innovation | Sen. Mike Rounds reintroduced the Unleashing AI Innovation in Financial Services Act during his first hearing as chair of the Senate Banking Subcommittee on Securities, Insurance and Investments. The bipartisan bill would let regulators and financial firms collaborate on testing AI tools in a controlled environment. Rounds said the goal is to create “a safe space for experimentation” while developing regulatory frameworks that support innovation and protect consumers.
Jul 31, 2025, Source: Argus Leader

➡️ Meta to Halt Political Ads in EU Over New Rules | Reported by Bloomberg: Meta will stop selling political and issue-based ads across the European Union starting in October, citing “legal uncertainty” under new EU transparency laws. The regulations restrict how platforms like Facebook and Instagram can target voters. Meta said the rules would limit key services and reduce market choice. The move follows Google’s similar decision late last year. While political ads aren’t a major revenue driver, the decision highlights rising tensions between Meta and EU regulators, who have also challenged the company on AI tools and data use.
Jul 25, 2025, Source: Bloomberg

Will this be the Next Big Thing in AI?

Legal Technology

Gavel and the Rise of Practical Automation in Legal Tech

In legal technology, efficiency tends to steal the spotlight. But for most lawyers, particularly those juggling high-volume workflows, the real question isn’t “can this tool save me time?” It’s “can it preserve the way I work while making it smarter, faster, and more scalable?”

That’s the key proposition behind Gavel.

Originally known for its powerful document automation engine, Gavel has steadily grown into a comprehensive platform that supports legal professionals in their everyday work by translating their processes into digital systems that lawyers actually control. Gavel doesn’t remove lawyers from the loop; it gives them command over their tools.

Gavel Workflows, the company’s first product, now adopted by thousands of law firms, allows firms to automate even the most complex Word and PDF documents using conditional logic, equations, and reusable variables. Its document automation capabilities have been particularly popular among solo practitioners, boutique firms, and legal aid organizations, which use Gavel to configure workflows, send client intake forms, and generate perfectly formatted, error-free documents in minutes. There’s no need to rebuild how you write or format your documents; Gavel simply adapts to them. And what’s better than an AI tool that does everything you need it to do, while sounding like you?

And with its latest release, Gavel Exec, the platform goes further still, bringing playbook-driven redlining and negotiation directly into Word.

AI Regulation Updates

The AI Regulation Tracker offers a clickable global map that gives you instant snapshots of how each country is handling AI laws, along with the most recent policy developments.

The most recent developments from the past week:

📋 2 August 2025 | GPAI obligations under EU AI Act come into effect: The obligations under Chapter V of the EU AI Act now apply to providers of general-purpose AI (GPAI) models, in accordance with Article 113(2). From this date, providers placing GPAI models on the EU market are required to comply with obligations including the preparation and maintenance of technical documentation, the provision of training dataset summaries, publication of model evaluations, and adherence to EU copyright law. For GPAI models identified as posing systemic risk, providers are additionally required to notify the AI Office, assess and document systemic risks, implement mitigation measures, and report serious incidents. Furthermore, providers of GPAI models not established within the EU are required to appoint an authorised representative based in the EU before placing their models on the EU market. The authorised representative must retain copies of the technical documentation and contact details for ten years and make them available to the AI Office or national authorities upon request.

📋 31 July 2025 | GovTech Singapore launches upgraded content guardrails for LLMs: GovTech Singapore has launched LionGuard 2, an upgraded content moderation guardrail designed for Singapore’s multilingual large language models (LLMs). Content moderation guardrails act as filters or constraints that prevent LLMs from generating harmful, biased, or inappropriate content. LionGuard 2 now supports all four of Singapore’s official languages (English, Chinese, Malay, and, in part, Tamil), addressing the nation’s unique linguistic landscape, including colloquial Singlish and frequent code-switching. The model has demonstrated significant improvements in accuracy, achieving an F1 score of 87%, up from 58.4% in its predecessor, and is currently deployed on the Singapore government’s AI Guardian platform. LionGuard 2 is available as an open-source API service for any text-centric LLM system, facilitating broader adoption and ensuring robust content moderation across various applications.

📋 30 July 2025 | Keep Call Centers in America Act introduced: US Senators Ruben Gallego (D-AZ) and Jim Justice (R-WV) have introduced a new bipartisan bill, the “Keep Call Centers in America Act of 2025,” which aims to keep call center jobs in the US by limiting federal benefits for companies that ship them overseas. Notably, the bill would require call center workers to immediately disclose to callers the physical location of the call center and whether artificial intelligence is being used. Workers would also be required to transfer a call to a US-based call center if the customer requests it.

📋 30 July 2025 | Unleashing AI Innovation in Financial Services Act introduced: US House Financial Services Committee Chairman French Hill (R-AR), Rep. Ritchie Torres (D-NY), Subcommittee on Digital Assets, Financial Technology, and Artificial Intelligence Chairman Bryan Steil (R-WI), and Rep. Josh Gottheimer (D-NJ), alongside Senator Mike Rounds (R-SD), Senator Andy Kim (D-NJ), Senator Thom Tillis (R-NC), and Senator Martin Heinrich (D-NM), have introduced the Unleashing AI Innovation in Financial Services Act (H.R. 4801). The bill would promote AI in financial services through regulatory sandboxes for AI test projects at federal financial regulatory agencies.

AI Tools that will supercharge your productivity

🆕 Deliberately.ai - Supercharge Your Family Law Practice with Client Intelligence

🆕 Querious - Enhancing Attorney-Client Conversations with Generative AI

🆕 The Contract Network - Safe AI for the Research Community

Want more Legal AI Tools? Check out our
Top AI Tools for Legal Professionals

The weekly ChatGPT prompt that will boost your productivity

This prompt turns hours of painful manual editing into minutes, meeting strict page limits without sacrificing persuasiveness or citation integrity.

Instructions:
Paste your over-length brief or motion and specify: target page/word limit, jurisdiction formatting rules (if any), and non-negotiable cites/arguments. Ask for an output that:

- Condenses to the limit while preserving core arguments and record/authority citations.

- Combines or removes redundancies, tightens prose, and simplifies headings.

- Provides a change log listing cuts/merges and the rationale.

- Returns both a clean version and a redline-style summary of edits.

- Flags any weakened support and suggests replacement cites if something essential was trimmed.

Collecting Data to make Artificial Intelligence Safer

The Responsible AI Collaborative is a not‑for‑profit organization working to present real‑world AI harms through its Artificial Intelligence Incident Database.

View the latest reported incidents below:

⚠️ 2025-07-18 | LLM-Driven Replit Agent Reportedly Executed Unauthorized Destructive Commands During Code Freeze, Leading to Loss of Production Data | View Incident

⚠️ 2025-05-26 | ChatGPT Reportedly Validated Autistic User's Faster-Than-Light Theory and Failed to Provide Grounding During Delusional Episode, Preceding Hospitalization | View Incident

The Legal Wire is an official media partner of:

Thank you so much for reading The Legal Wire newsletter!

If this email lands in your “Promotions” or “Spam” folder, move it to your primary folder so you do not miss out on the next Legal Wire :)

Did we miss something or do you have tips?

If you have any tips for us, just reply to this e-mail! We’d love any feedback or responses from our readers 😄 

Disclaimer

The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs ("Materials"), are accurate and complete.

Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not necessarily reflect the current laws or regulations.

The Materials should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.
