ListedAI Daily
ChatGPT users flee to Claude, workers revolt over Pentagon deals
OpenAI faces consumer exodus and worker revolt as defense partnerships trigger mass uninstalls and industry-wide activism

Wednesday Deep Dive
(Reading Time: 4 minutes)
Want to get the most out of ChatGPT?
ChatGPT is a superpower if you know how to use it correctly.
Discover how HubSpot's guide to AI can elevate your productivity and creativity and help you get more done.
Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.
The Wednesday Deep Dive takes a detailed look at what's new in AI. Each week, we share in-depth insights on new tools, proven prompts, and significant developments - helping tech professionals work smarter and stay ahead.
This week's stories converge on a single fault line: how AI companies navigate their relationships with defense agencies while managing internal dissent and consumer backlash. From app uninstalls to coordinated worker activism, the tension between commercial ambition and ethical boundaries has never been more visible.
📱 ChatGPT uninstalls surge 295% after Department of War partnership
✊ Google and OpenAI workers coordinate demands for military AI limits
Let's dive in.
Quick note! I am proud to recommend the AI to ROI Newsletter to our subscribers. This information-rich AI newsletter covers the major news happening in AI every week via digestible daily editions, each devoted to a different topic: 1) AI Big Story of the Week; 2) AI Use Case of the Week; 3) AI Metric of the Week; 4) AI Report of the Week; and 5) Weekly News and Analysis, where they curate the top 10 news stories and analyses from the hundreds of newsworthy events each week. They filter, curate, and analyze the most important stories and trends so you spend less time sifting through the noise.
The AI to ROI Newsletter is co-authored by B2B software veterans Ray Rike, CEO of Benchmarkit, and Peter Buchanan, CEO of NewPlan, who apply their operating experience to bring you not only the news but also why each story matters. You can see the Big Story of the Week covering the three top SaaS-to-AI-First playbooks at ServiceNow, Notion, and Canva by clicking here. If you prefer reading the latest week's News and Analysis, click here.
🌐 AI News
📱 Users Flee ChatGPT After Defense Department Deal

OpenAI's partnership with the Department of Defense (recently rebranded under the Trump administration as the Department of War) triggered an immediate consumer backlash that shows up clearly in the data.
According to market intelligence firm Sensor Tower, US app uninstalls of ChatGPT jumped 295% day-over-day on Saturday, February 28, shortly after the partnership was announced. That's a sharp departure from ChatGPT's typical 9% day-over-day uninstall rate measured over the previous 30 days.
At the same time, consumers didn't just leave. They migrated.
🏃 Where They Went:
Downloads of Anthropic's Claude spiked 37% on Friday and 51% on Saturday after the company publicly announced it would not partner with the US defense department. Anthropic cited concerns over AI being used for domestic surveillance and fully autonomous weaponry, areas where the technology isn't yet safe or ready.
The shift was dramatic and immediate:
ChatGPT's US downloads dropped 13% day-over-day on Saturday and another 5% on Sunday.
Claude hit No. 1 on the US App Store by Saturday, jumping over 20 ranks from the week prior.
One-star reviews for ChatGPT surged 775% on Saturday, then grew another 100% on Sunday.
Five-star reviews for ChatGPT declined 50% during the same period.
Third-party providers confirmed the trend. Appfigures reported that Claude's total daily US downloads surpassed ChatGPT's for the first time on Saturday, with Claude's downloads up 88% day-over-day. Claude also became the No. 1 free iPhone app in six countries outside the US, including Canada, Germany, Norway, and Switzerland.
Similarweb noted that Claude's US downloads over the past week were roughly 20x higher than in January, though the firm cautioned that other factors beyond the political fallout could be contributing.
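For readers wondering how figures like "up 295% day-over-day" are derived, here is a minimal sketch. The download and uninstall counts below are hypothetical (Sensor Tower and Appfigures publish percentage moves, not the raw numbers behind them); only the formula is the point.

```python
# Hypothetical daily uninstall counts, chosen to illustrate the math.
# Real Sensor Tower data is proprietary; these are NOT the actual figures.
daily_uninstalls = {
    "Friday": 13_100,
    "Saturday": 51_700,  # spike after the partnership news
}

def day_over_day_change(previous: int, current: int) -> float:
    """Percent change from one day's count to the next."""
    return (current - previous) / previous * 100

change = day_over_day_change(daily_uninstalls["Friday"],
                             daily_uninstalls["Saturday"])
print(f"{change:.0f}% day-over-day")  # ~295% with these sample numbers
```

The same calculation underlies every day-over-day statistic in this story, whether it describes uninstalls, downloads, or review counts.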
🤔 Why It Matters:
This isn't just noise. It's a consumer referendum on how AI companies align themselves with government power.
Users voted with their taps. When OpenAI chose to partner with defense agencies, a meaningful segment of its user base decided that it violated their trust. And when Anthropic declined that same partnership on ethical grounds, it gained a competitive advantage overnight.
The speed and scale of this shift raise questions about brand loyalty in the AI era. If users can switch models as easily as switching apps, companies may face real commercial risk when they take controversial positions on national security, surveillance, or weapons development.
For OpenAI, the fallout is a reminder that public perception matters, even in enterprise-heavy markets. And for Anthropic, the boost proves that ethical positioning can be a growth lever, not just a PR talking point.
The question now is whether this consumer behavior sticks, or whether convenience eventually overrides principle.
🌐 AI News
✊ Google and OpenAI Workers Demand Military AI Limits

While users were uninstalling apps, employees inside the AI labs were organizing.
Hundreds of workers at Google and OpenAI have signed onto joint demands for stricter limits on military AI applications, according to sources familiar with the organizing efforts. This marks the first coordinated worker action between the two competing AI giants, and it comes amid escalating military operations in Iran that rely heavily on AI-powered targeting and surveillance systems.
The employee letters, circulating internally since late last week, call for explicit prohibitions on:
Using company AI systems for autonomous weapons targeting
Offensive cyber operations
What organizers describe as "ethically ambiguous intelligence applications"
One Google software engineer told colleagues that the Iran military operations, which have increasingly relied on AI-enhanced surveillance and strike coordination, represent exactly the kind of use case employees want blocked.
🛡️ The Anthropic Factor:
The organizing effort was directly influenced by the Pentagon's blacklisting of Anthropic's AI models. The company was placed on a restricted-use list not for technical failures, but over what Pentagon officials described as insufficient cooperation on safety audits and access protocols.
That designation effectively bars Anthropic from lucrative government AI contracts, a market expected to hit $50 billion annually by 2028. The move sent shockwaves through Silicon Valley and galvanized employees at other labs who saw it as proof that taking ethical stands carries real commercial consequences.
📣 What Employees Are Saying:
"We're not anti-defense," one OpenAI researcher explained in internal messages reviewed by sources. "We're anti-unaccountable AI in life-or-death scenarios."
The tone of the demands reflects a belief that AI systems are being deployed in military contexts faster than their safety, reliability, or ethical frameworks can keep up. Workers are asking for transparency, oversight, and the ability to opt out of projects that conflict with their values.
🤔 Why It Matters:
This is more than an internal HR issue. It's a strategic vulnerability for AI companies competing for defense contracts worth billions.
Worker activism at this scale can:
Slow down product development timelines
Create reputational risks that spook commercial clients
Force leadership to choose between lucrative government deals and employee retention
The fact that Google and OpenAI employees are coordinating suggests this isn't isolated dissent. It's an industry-wide reckoning over how AI gets used in warfare, surveillance, and national security.
For companies like OpenAI, the timing is especially awkward. The consumer backlash over the DoD deal is now paired with internal resistance from the engineers building the models in the first place. That's pressure from both sides of the business.