
Wiz Finds Major Security Gap On Moltbook AI Agent Platform

Cybersecurity firm discovered a misconfigured database that allowed broad access to credentials and private data


Cybersecurity firm Wiz reported that Moltbook, a new social platform exclusively used by AI agents, exposed sensitive data of over 6,000 real people, as well as over a million API authentication credentials. 

Moltbook, launched in late January by entrepreneur Matt Schlicht and marketed as a social network for artificial intelligence (AI) agents, left its Supabase database improperly configured. As a result, an API key embedded in client-side code granted unauthenticated users full read-and-write access to the platform’s production database.

The exposed data included about 1.5 million API authentication tokens, roughly 35,000 email addresses, and private messages between accounts on the network, researchers said. API tokens serve as credentials that allow software components or accounts to access services and, if misused, can function like passwords.

Wiz researchers conducted what they described as a “non-intrusive security review,” browsing the platform as a typical user before discovering an exposed key that granted access to all tables in the database. They immediately disclosed the issue to Moltbook’s team, which, Wiz said, patched the vulnerability within hours with the researchers’ assistance. The researchers also stated that all data accessed during their review and during verification of the fix was deleted.

Gal Nagli, head of threat exposure at Wiz, said the situation stemmed from a missing database protection called Row Level Security (RLS). “When properly configured with Row Level Security, the public API key is safe to expose,” Nagli wrote in a blog post disclosing the findings. “However, without RLS policies, this key grants full database access to anyone who has it.”
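To illustrate Nagli’s point, here is a minimal sketch of what such an exposure looks like in practice. Supabase projects serve their tables over a REST API, and client-side apps ship the public “anon” key in their JavaScript. The project URL, key, and table name below are hypothetical placeholders, not values from the Moltbook incident.

```python
from urllib.request import Request

# Hypothetical values: a Supabase project exposes its REST API at
# https://<project>.supabase.co/rest/v1/, and the public "anon" key is
# visible to anyone who reads the site's client-side code.
SUPABASE_URL = "https://example-project.supabase.co"
ANON_KEY = "public-anon-key-from-the-js-bundle"

def read_all_rows_request(table: str) -> Request:
    """Build a GET request for every row in a table.

    With Row Level Security enabled, the database filters the response to
    rows the anon role is allowed to see. With RLS disabled -- the kind of
    misconfiguration Wiz describes -- the same request returns the entire
    table, including other users' private data.
    """
    return Request(
        f"{SUPABASE_URL}/rest/v1/{table}?select=*",
        headers={
            "apikey": ANON_KEY,
            "Authorization": f"Bearer {ANON_KEY}",
        },
    )

# Anyone holding the public key can construct (and send) this request.
req = read_all_rows_request("private_messages")
```

The request itself is ordinary and requires no exploit: the only secret involved is a key that was published by design, which is why the safety of the whole scheme rests on RLS policies being in place.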

Moltbook’s creator, Matt Schlicht, posted on X that he did not write the site’s code directly, instead relying on “vibe coding,” in which AI tools generate code based on high-level guidance from developers.

The platform positioned itself as a “Reddit-like” forum restricted to AI agents, though some observers noted that it had no technical safeguards to verify whether accounts were truly autonomous agents or human-operated scripts.

The flaw was fixed shortly after disclosure; Moltbook was taken offline temporarily while its database security settings were corrected and all API keys were reset.
