When the builders of AI don’t trust the internet
OpenAI’s Sam Altman admits he can’t tell bots from people.

Wednesday Deep Dive
(Reading Time: 4 minutes)
The Wednesday Deep Dive takes a detailed look at what's new in AI. Each week, we share in-depth insights on new tools, proven prompts, and significant developments, helping tech professionals work smarter and stay ahead.
This week’s stories show how AI is reshaping both the authenticity of digital spaces and the boundaries of regulation. One is about the CEO of OpenAI admitting he can’t tell what’s real on social media anymore. The other is about Anthropic backing a California bill that could set the first binding safety rules for frontier AI.
🤖 Sam Altman says social media feels “fake” as bots overwhelm the feed
⚖️ Anthropic endorses California’s AI safety bill, breaking with Silicon Valley peers
Let's dive in.
🌐 AI News
🤖 Sam Altman: “I assume it’s all fake”
Sam Altman admitted this week that he can no longer tell if posts on Reddit or X are written by people or bots. The OpenAI CEO, who also happens to be a longtime Reddit shareholder, said that even genuine discussions about OpenAI’s Codex now feel suspiciously artificial.
On X, Altman wrote: “I assume it’s all fake/bots, even though in this case I know codex growth is really strong, and the trend here is real.”
The paradox is striking. The man leading the company behind GPT models is now one of the loudest voices warning that the internet feels hollowed out by them.
📉 The scale of the problem:
Cybersecurity firm Imperva estimates that over half of all web traffic now comes from automated sources, including bots and LLM crawlers.
Grok, the AI chatbot built into X, has suggested that hundreds of millions of bot accounts may be active on the platform daily.
Humans themselves are adopting “LLM-speak,” creating a feedback loop where even authentic posts sound synthetic.
Critics suggest Altman’s alarm may not be entirely altruistic. Reports earlier this year indicated OpenAI is exploring its own social platform. By painting existing networks as bot-saturated, Altman could be setting the stage for a supposedly “authentic” alternative.
💡 Why it matters:
The internet’s trust crisis is no longer theoretical. If even the architects of AI can’t tell what’s real, the credibility of online discourse is at risk. For businesses, investors, and policymakers, this raises urgent questions: how do you measure sentiment, detect fraud, or build community in a space where authenticity is indistinguishable from simulation?
🌐 AI News
⚖️ Anthropic Breaks Ranks, Endorses California’s AI Safety Bill
While Altman laments the state of the internet, one of his chief rivals is making a surprising move to regulate AI's future. Anthropic has officially endorsed SB 53, a California bill that would impose new safety and transparency rules on developers of the largest AI models.
The move is a major win for the bill and creates a fracture in Silicon Valley’s largely unified opposition to state-level regulation. Tech lobbying groups and investors like Andreessen Horowitz have been actively campaigning against the legislation.
In a blog post, Anthropic acknowledged the dilemma. “While we believe that frontier AI safety is best addressed at the federal level... powerful AI advancements won’t wait for consensus in Washington.”
If passed, SB 53 would introduce several requirements for frontier model developers like OpenAI, Google, and Anthropic itself:
Publish safety and security reports before deploying powerful new models
Establish whistleblower protections for employees who raise safety concerns
Focus on preventing "catastrophic risks," defined as events causing at least 50 deaths or over a billion dollars in damage, such as AI-assisted bioweapon creation or massive cyberattacks
Apply only to companies with gross revenue over $500 million, exempting smaller startups
The endorsement from a major player like Anthropic gives the bill significant momentum. Governor Gavin Newsom has not yet taken a public stance, though he vetoed a previous, more stringent AI safety bill last year.
💡 Why this matters:
Anthropic’s endorsement signals a critical split within the AI industry on how to approach governance. Until now, the dominant position among major labs has been to advocate for self-regulation while pushing for federal, not state, oversight.
This public support from a leading lab challenges that narrative. It suggests a growing recognition that waiting for perfect federal legislation is a risky strategy when the technology is advancing so quickly. OpenAI, in contrast, sent a letter to Governor Newsom in August arguing against state-level rules that could push startups out of California.
SB 53 may become a blueprint for AI governance that other states, and perhaps even the federal government, could follow. Anthropic’s decision to back it could be the first crack in the dam of industry resistance to meaningful regulation.
💭 What This Means
Most AI labs already maintain some version of the safety policies SB 53 would require. OpenAI, Google DeepMind, and Anthropic regularly publish safety reports. But these are currently voluntary commitments with no external enforcement.
SB 53 would make these requirements legally binding, with financial penalties for non-compliance.
Anthropic's endorsement signals a bet that regulatory certainty is preferable to the current patchwork of self-governance, even if that regulation comes with compliance costs and public scrutiny.