Kids, Chatbots, and a Molten Salt Reactor

Google signs on for next-gen nuclear to power its AI future.


Wednesday Deep Dive

(Reading Time: 4 minutes)


The Wednesday Deep Dive takes a detailed look at what's new in AI. Each week, we share in-depth insights on new tools, proven prompts, and significant developments, helping tech professionals work smarter and stay ahead.

This week’s stories hit two very different fronts of the AI expansion: child safety and energy infrastructure. One reveals the legal and ethical lines being crossed in consumer-facing AI. The other shows what it takes to power this revolution in the first place.

🧠 Texas investigates Meta & Character.AI for misleading kids with fake AI therapy bots

⚛️ Google signs nuclear deal to power AI with molten salt reactor tech

Let's dive in.

🌐 AI News

🧒 Texas Targets Meta & Character.AI for AI “Therapy” Chatbots

Texas Attorney General Ken Paxton has launched a formal investigation into Meta AI Studio and Character.AI, alleging that both platforms are marketing themselves as mental health resources without disclosing their lack of credentials, oversight, or privacy protection.

According to Paxton’s office, these AI-driven platforms have presented themselves as capable of delivering legitimate therapeutic advice, despite having no licensed professionals involved and offering no formal medical regulation. Some chatbots reportedly impersonate therapists, invent credentials, or suggest they're offering confidential, humanlike counseling, a combination that could be dangerously misleading, especially for minors.

“By posing as sources of emotional support,” Paxton said, “AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care.”

And behind the curtain? A very different reality.

Most of these platforms log and track every interaction, with chat histories feeding future model training or personalized ad targeting. Their terms of service often contradict the implied sense of safety and privacy users might expect when discussing intimate issues. The investigation centers on potential violations of Texas Deceptive Trade Practices laws, including false claims, hidden data use, and privacy misrepresentation.

Civil Investigative Demands (CIDs) have now been issued to both Meta and Character.AI. This follows an ongoing probe into Character.AI under Texas's SCOPE Act, which governs how digital service providers handle minors' data and online interactions.

💡 Why it matters:

There’s a growing gray zone where AI and health intersect. While chatbots can offer helpful general information or even companionship, they are increasingly blurring the line between casual interaction and clinical guidance, especially when users are young, isolated, or vulnerable.

The platforms being investigated aren't explicitly labeled as therapeutic tools, but they often allow users to create personas like “Therapist Lisa” or “Counselor Jake.” These personas can mimic therapeutic dialogue with high emotional fluency, sometimes dispensing mental health advice without ever suggesting professional help.

What’s emerging is a familiar dilemma in the AI age: tools designed for entertainment or support may evolve into de facto substitutes for professionals, without carrying any of the responsibilities. And when minors are involved, the stakes get higher fast.

This won’t be the last state-led action on AI health claims. But it’s likely the first of many that try to draw a line between conversation simulators and unregulated mental health services.


⚛️ Google Backs Molten Salt Reactor to Power Its AI Ambitions

While AI raises red flags on the consumer front, it’s also pushing the U.S. toward radical shifts in energy infrastructure. This week, Google signed a landmark agreement with the Tennessee Valley Authority (TVA) to purchase power from a next-generation molten salt nuclear reactor being built by Kairos Power in Oak Ridge, Tennessee.

This marks the first U.S. power purchase agreement involving such advanced nuclear technology—an attempt to marry energy innovation with the skyrocketing power needs of AI data centers.

The project centers on Hermes 2, a demonstration plant based on Kairos’ fluoride-salt-cooled high-temperature reactor design. Unlike conventional reactors, which rely on pressurized water as a coolant, Hermes 2 uses molten fluoride salt, letting the system run at near-atmospheric pressure while operating at higher temperatures. That improves thermal efficiency and reduces the cost, complexity, and safety risks tied to older nuclear designs.

The choice of location is symbolic. Oak Ridge was a major site of the Manhattan Project, where uranium enrichment for the first atomic bombs took place. Now it’s home to a new era of nuclear tech aimed at meeting Big Tech’s rising demand for carbon-free, reliable energy.

🚀 What Google’s doing:

The deal commits Google to purchasing clean energy attributes from Hermes 2 through TVA; these certificates represent the carbon-free nature of the electricity generated. They allow Google to offset the carbon footprint of its AI operations in Tennessee and Alabama, even if the local grid still runs partly on fossil fuels.

The long-term goal? Help Kairos deploy 500 megawatts of new nuclear capacity by 2035. That's a small fraction, roughly half a percent, of America's current 97,000 MW nuclear fleet, but a meaningful step if modular reactors like Hermes can scale affordably.
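The arithmetic behind that “small fraction” is easy to verify. A minimal back-of-the-envelope sketch, using only the two capacity figures quoted above (the variable names are illustrative, not from any source):

```python
# Sanity-check the capacity comparison quoted in the article.
KAIROS_TARGET_MW = 500      # new capacity Kairos aims to deploy by 2035
US_NUCLEAR_MW = 97_000      # approximate current U.S. nuclear fleet capacity

share = KAIROS_TARGET_MW / US_NUCLEAR_MW
print(f"Kairos target as share of current fleet: {share:.2%}")  # ~0.52%
```

So even at full build-out, Kairos' 2035 target adds about half a percent to today's fleet; the significance is in proving the modular design scales, not in the raw megawatts.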

💡 Why this matters:

Data centers now account for 2–3% of U.S. electricity use, a number expected to double or triple as AI expands. Google’s own emissions rose in 2024, despite its clean energy goals. If advanced nuclear can provide round-the-clock, low-carbon electricity, it could become a cornerstone of AI-era infrastructure.

Kairos’ approach isn’t just experimental; it has real regulatory milestones behind it. Hermes 1, the initial test reactor, was the first non-water-cooled reactor in over 50 years to receive a construction permit from the Nuclear Regulatory Commission (NRC).

This puts Kairos (and now Google) at the forefront of what many see as nuclear’s second act: smaller, safer, and tailored to the energy-hungry future of AI and high-performance computing.
