Autonomous Weapons and OpenAI’s Enterprise Edge

Two fronts, one AI future.

In partnership with

Wednesday Deep Dive

(Reading Time: 4 minutes)

100 Genius Side Hustle Ideas

Don't wait. Sign up for The Hustle to unlock our side hustle database. Unlike generic "start a blog" advice, we've curated 100 actual business ideas with real earning potential, startup costs, and time requirements. Join 1.5M professionals getting smarter about business daily and launch your next money-making venture.

The Wednesday Deep Dive takes a detailed look at what's new in AI. Each week, we share in-depth insights on new tools, proven prompts, and significant developments, helping tech professionals work smarter and stay ahead.

This week, we look at how AI is being adopted and weaponized—both literally and figuratively.

🔍 Global regulators convene on autonomous weapons as AI arms race accelerates

🏢 OpenAI dominates enterprise AI spend, far outpacing competitors

Let's dive in.

🌐 AI News

🤖 UN Convenes to Curb ‘Killer Robots’

Diplomats, researchers, and rights advocates gathered in New York this week for the United Nations’ first General Assembly meeting dedicated entirely to autonomous weapons systems, marking a critical moment in global efforts to regulate military AI.

The push comes amid the rapid adoption of AI-powered weapons in active conflicts. From Ukraine's semi-autonomous drones to Israel's use of AI targeting systems in Gaza, autonomous and AI-assisted military technology is no longer on the horizon. It's already here, and it's spreading fast.

Key Background:

  • Since 2014, the UN’s Convention on Certain Conventional Weapons (CCW) has held discussions about limiting or banning fully autonomous systems that act without meaningful human control.

  • UN Secretary-General António Guterres has set a 2026 deadline for implementing global rules on AI weapons use.

  • However, consensus remains elusive. Major powers (namely the U.S., Russia, China, and India) oppose binding regulation, favoring national guidelines instead.

📦 What's New:

  • Monday’s session marks the General Assembly’s first dedicated meeting on autonomous weapons, broadening discussions beyond the CCW.

  • Talks addressed topics outside the CCW’s scope, including ethical concerns, human rights implications, and the use of AI weapons by non-state actors.

  • Campaign groups are urging states to push for a legally binding treaty, arguing that without international guardrails, an arms race is inevitable.

Alexander Kmentt, Austria’s head of arms control, put it bluntly: “Time is really running out to put in some guardrails so that the nightmare scenarios that some of the most noted experts are warning of don’t come to pass.”

🤔 Why It Matters

Autonomous weapons pose a unique regulatory challenge. Unlike nuclear or chemical arms, AI systems are software-based, scalable, and rapidly evolving. Without enforceable rules, these tools could be developed, deployed, and sold with little oversight.

Human Rights Watch has warned of several open risks:

  • Accountability gaps: When an autonomous weapon kills unlawfully, who is responsible? The developer, the operator, or no one?

  • Proliferation: Without regulation, these systems are likely to spread to authoritarian regimes, terrorist groups, and private militaries.

  • Human rights violations: As AI systems become judge, jury, and executioner, key principles of international law are at risk.

Campaigners like Laura Nolan from Stop Killer Robots are skeptical of self-policing. “We do not generally trust industries to self-regulate... There is no reason why defence or technology companies should be more worthy of trust.”

The Future of Life Institute has tracked 200 autonomous systems deployed across Ukraine, the Middle East, and Africa. Russia alone has reportedly deployed 3,000 “Veter” kamikaze drones with autonomous targeting.

⚔️ Global Resistance and the Arms Race Ahead

Despite growing alarm, major military powers remain reluctant to agree on international controls. The Pentagon insists existing laws are sufficient and argues that autonomous weapons may reduce civilian casualties. China, Russia, and India declined to comment on their positions.

For now, this has left middle powers—like Austria, Ireland, and Costa Rica—leading the charge for global restrictions. Their goal: a legally binding treaty that defines what AI-assisted systems can and cannot do.

Until then, AI continues to seep into the battlefield with little clarity on the rules of engagement.

🌐 AI News

🏃 OpenAI Pulls Ahead in the Corporate AI Race

While governments grapple with AI’s role in war, OpenAI is tightening its grip on a different kind of power: corporate infrastructure.

According to new transaction data from fintech firm Ramp, OpenAI is dominating enterprise AI adoption in the U.S., rapidly outpacing its closest rivals.

📈 Key Numbers (via Ramp’s AI Index):

  • OpenAI: 32.4% of U.S. businesses paid for its products as of April 2025, up from 18.9% in January.

  • Anthropic: 8% of businesses paid for its products, up from 4.6%.

  • Google AI: just 0.1% in April, down from 2.3% in February.

Ramp's AI Index draws from anonymized card and bill pay data from over 30,000 companies.

Though imperfect, it paints a clear picture: OpenAI is scaling enterprise adoption at a velocity unmatched in the sector.
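
For a rough sense of how a spend-based index like this could be computed, here's a minimal sketch, assuming each anonymized record simply maps a company to an AI vendor it paid. The record layout, field names, and vendor labels below are illustrative; this is not Ramp's actual methodology.

```python
# Illustrative sketch: estimating the share of companies that pay each AI
# vendor from anonymized transaction records (hypothetical data layout).
from collections import defaultdict

def adoption_share(transactions, total_companies):
    """Return the share of companies with at least one payment to each vendor."""
    payers = defaultdict(set)
    for company_id, vendor in transactions:
        payers[vendor].add(company_id)
    return {vendor: len(ids) / total_companies for vendor, ids in payers.items()}

# Toy data: three companies; two pay OpenAI, one of them also pays Anthropic.
sample = [(1, "openai"), (1, "anthropic"), (2, "openai")]
print(adoption_share(sample, total_companies=3))
# -> roughly {'openai': 0.67, 'anthropic': 0.33}
```

The published figures (for example, OpenAI climbing from 18.9% to 32.4% between January and April) are this kind of share tracked month over month.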

📊 Why It Matters:

OpenAI’s surge has major implications for the business of AI:

  • Network effects: As more companies use OpenAI tools, it becomes easier to justify integrations, partnerships, and long-term investment in their ecosystem.

  • Revenue flywheel: OpenAI projects $12.7 billion in revenue this year, with expectations to more than double that to $29.4 billion in 2026.

  • Enterprise agents: OpenAI is reportedly planning to offer specialized AI agents for tasks like software development and research, potentially charging thousands per customer.

This momentum reinforces OpenAI’s position as the default AI infrastructure provider for large-scale enterprises, much like Microsoft was for productivity software.

🧠 Strategic Implications:

What’s driving OpenAI’s dominance?

  1. Product differentiation: GPT-4o and its variants continue to lead in model quality, especially in reasoning and multimodal capabilities.

  2. Early mover advantage: Thanks to ChatGPT’s consumer success, OpenAI has strong brand recognition among IT decision-makers.

  3. Partner ecosystem: Collaborations with Microsoft (Azure), Salesforce, and other enterprise vendors have boosted credibility and distribution.

Meanwhile, Google’s enterprise AI efforts continue to underperform, despite its technical prowess in models like Gemini. Anthropic, though respected, remains a distant second with limited traction outside niche enterprise verticals.
