The Gold standard for AI news
AI keeps coming up at work, but you still don't get it?
That's exactly why 1M+ professionals working at Google, Meta, and OpenAI read Superhuman AI daily.
Here's what you get:
Daily AI news that matters for your career - Filtered from 1000s of sources so you know what affects your industry.
Step-by-step tutorials you can use immediately - Real prompts and workflows that solve actual business problems.
New AI tools tested and reviewed - We try everything to deliver tools that drive real results.
All in just 3 minutes a day
Moltbot (formerly Clawdbot) is the viral AI agent everyone's calling "the future." But security researchers just found hundreds of exposed instances leaking API keys, credentials, and full conversation histories. One researcher proved he could hijack a user's email in 5 minutes. Here's what the hype isn't telling you — and a safer alternative that launched two weeks ago.

A security researcher sent one malicious email to a Moltbot instance.
Five minutes later, the AI had forwarded the user's last five emails to an attacker's address.
That's not a theoretical risk. That actually happened. Researcher Matvey Kukuy documented the whole thing — the email, the prompt injection, the exfiltration. Five minutes. Done.
And that's just one of hundreds of exposed instances sitting on the open internet right now.
If you type "Clawdbot Control" into Shodan (a search engine for internet-connected devices), you'll find them. API keys. Bot tokens. OAuth secrets. Full conversation histories. The ability to send messages as users. All sitting there, waiting.
The creator of Moltbot, Austrian developer Peter Steinberger, describes running it on your primary machine as "spicy."
That should tell you everything.
What Moltbot Actually Is
Moltbot (it was called "Clawdbot" until Anthropic sent a trademark notice last week) is an open-source AI agent that runs locally on your Mac or server. Unlike ChatGPT or Claude, which live in a browser, Moltbot lives on your machine with access to your files, your apps, your accounts.
You control it through WhatsApp, Telegram, or Slack. You tell it what to do. It does it.
Book a restaurant reservation. Respond to emails. Manage your calendar. Execute shell commands. The project exploded to 60,000+ GitHub stars in days — one of the fastest-growing open-source projects in history.
The appeal is obvious: an AI that actually does things, not just talks about doing things.
The problem is also obvious: to do those things, it needs the keys to your digital life.
The Security Nightmare
Here's what security researchers found in the past week:
Hundreds of exposed control panels. Security firm SlowMist and independent researcher Jamieson O'Reilly discovered that many users misconfigured their Moltbot instances, leaving admin panels accessible from the open internet. Eight instances had zero authentication — anyone could run commands.
Plaintext credential storage. Moltbot stores your API keys, tokens, and passwords in plaintext Markdown and JSON files. Security firm Hudson Rock says infostealer malware families like Redline, Lumma, and Vidar are already targeting these files.
Prompt injection attacks. Because Moltbot reads your messages and emails, an attacker can embed malicious instructions in content it processes. That's how Kukuy pulled off the 5-minute email heist — the AI read the malicious email, believed it was legitimate instructions, and obeyed.
Supply chain poisoning. O'Reilly demonstrated a proof-of-concept attack where he uploaded a malicious "skill" (plugin) to ClawdHub, artificially inflated the download count, and watched developers from seven countries install it.
The project documentation acknowledges that "no perfectly secure setup exists when operating an AI agent with shell access."
They're not wrong. But that's exactly the point.
The Safer Alternative: Claude Cowork
Two weeks ago — in what might be the most interesting timing of 2026 — Anthropic launched Claude Cowork.
It's the same idea: an AI agent that does things for you on your computer. But the execution is fundamentally different.
Sandboxed access. Cowork only accesses a specific folder you designate. It runs in an isolated virtual machine. It literally cannot touch files outside that boundary.
Permission prompts. Before any destructive action (like deleting files), Cowork asks for explicit approval.
No shell access. Cowork doesn't have root privileges on your machine. It creates files, organizes folders, generates documents — but it's not running arbitrary terminal commands.
Corporate backing. Anthropic built Cowork with security researchers on staff. When something goes wrong, there's a company to fix it — not a solo developer juggling trademark disputes and crypto scammers (yes, scammers hijacked Clawdbot's accounts during the rebrand and launched a fake $CLAWD token that hit $16 million before crashing).
The tradeoff is obvious: Cowork is less powerful. It can't book your restaurant reservations or respond to your WhatsApp messages. It's a sandboxed file assistant, not a full system agent.
But it also won't leak your credentials to the open internet.
The Prompt (Copy This)
Want to understand your own risk profile for AI agents? Ask Claude:
I'm considering using an AI agent that would have access to [describe what you'd give it access to — email, calendar, files, messaging apps, etc.].
Help me think through:
1. What's the worst-case scenario if this agent is compromised?
2. What credentials or data would be exposed?
3. What would an attacker be able to do with that access?
4. What safeguards should I require before using a tool like this?
Be specific and don't sugarcoat the risks.

🛠️ Tool Worth Knowing: Claude Cowork
If you want an AI that works on your files without the security Russian roulette, Cowork is worth trying.
How to Get Claude Cowork:
🌐 Platform: Claude Desktop app (macOS only for now)
💰 Pricing: Included with Claude Pro ($20/mo) or Max ($100-200/mo)
📋 Waitlist: Available for Pro subscribers; waitlist for free tier
Point it at a folder, describe what you want, let it work.
🗞️ Quick Bites
APPLE SURRENDERS: SIRI WILL RUN ON GOOGLE GEMINI
After years of falling behind, Apple announced a multi-year deal
to power Siri with Google's Gemini AI. The upgraded assistant
launches in March with iOS 26.4. Apple is reportedly paying $1B/year.
Translation: even Apple admits it can't build a competitive AI alone.
────────────────────────────────────────────────────────────────
100+ FAKE CITATIONS FOUND IN TOP AI CONFERENCE PAPERS
AI detection company GPTZero found over 100 hallucinated citations
across 51 papers accepted at NeurIPS — one of the world's most
prestigious AI conferences. Fake author names. Non-existent DOIs.
All past peer review. The AI slop is coming from inside the house.
────────────────────────────────────────────────────────────────
AGENTIC AI MARKET TO HIT $200 BILLION BY 2034
The market for AI agents is projected to grow from $5.2B in 2024
to nearly $200B by 2034. January saw the trend accelerate with
smaller, task-specific models and enterprise deployments moving
from "experimental" to "production." The agent era is here.
⚡ Your Action Step
Before you install any AI agent that wants access to your system, your accounts, or your files, ask yourself: What happens if this gets compromised?
If the answer involves your email, your bank, your work credentials, or anything you can't easily rotate — wait. The safer tools are coming. Cowork is just the start.
The AI agent future is real. But it doesn't have to be reckless.

📚 Sources
SECURITY RESEARCH
────────────────────────────────────────────────────────────────
Hundreds of exposed Moltbot instances with credentials
→ The Register: Clawdbot becomes Moltbot, but can't shed security concerns
https://www.theregister.com/2026/01/27/clawdbot_moltbot_security_concerns/
5-minute email exfiltration via prompt injection
→ DEV Community: From Clawdbot to Moltbot
https://dev.to/sivarampg/from-clawdbot-to-moltbot-how-a-cd-crypto-scammers-and-10-seconds-of-chaos-took-down-the-4eck
Infostealer malware targeting Moltbot credential files
→ Bitdefender: Moltbot security alert
https://www.bitdefender.com/en-us/blog/hotforsecurity/moltbot-security-alert-exposed-clawdbot-control-panels-risk-credential-leaks-and-account-takeovers
CLAUDE COWORK
────────────────────────────────────────────────────────────────
Cowork launch and security model
→ TechCrunch: Anthropic's new Cowork tool offers Claude Code without the code
https://techcrunch.com/2026/01/12/anthropics-new-cowork-tool-offers-claude-code-without-the-code/
NEWS BITES
────────────────────────────────────────────────────────────────
Apple-Google Gemini partnership
→ CNBC: Apple picks Google's Gemini to run AI-powered Siri
https://www.cnbc.com/2026/01/12/apple-google-ai-siri-gemini.html
Hallucinated citations in NeurIPS papers
→ Humai Blog: AI News January 2026
https://www.humai.blog/ai-news-trends-january-2026-complete-monthly-digest/
Agentic AI market projection
→ AI Apps: Top AI News for January 2026
https://www.aiapps.com/blog/ai-news-january-2026-breakthroughs-launches-trends/
|
Pro‑Grade Material Weights in SecondsBuilt for contractors, architects, and engineers.
Trusted by Pros Nationwide. |
About This Newsletter
AI Super Simplified is where busy professionals learn to use artificial intelligence without the noise, hype, or tech-speak. Each issue unpacks one powerful idea and turns it into something you can put to work right away.
From smarter marketing to faster workflows, we show real ways to save hours, boost results, and make AI a genuine edge — not another buzzword.
Get every new issue at AISuperSimplified.com — free, fast, and focused on what actually moves the needle.
|
If you enjoyed this issue and want more like it, subscribe to the newsletter.
Brought to you by Stoneyard.com • Subscribe • Forward • Archive



