
Reading Time: 9 minutes

Hey Prompt Lover,

I owe you an apology first.

This happened on Tuesday and I didn't get this to you until today. I was heads down on something else and by the time I came up for air, half the internet had already seen it.

But here's the thing — most of what went around was technical, moved fast, and made the actual meaning easy to miss.

So I'm going to slow it down, tell you exactly what happened, and more importantly, tell you why it matters to you even if you've never written a line of code in your life.

Because this one is bigger than it looks.

Email Still Wins. Here's How to Use It Better.

59% of Americans say most marketing emails offer no real value. That's not a threat; it's an opening. Get the AI-powered playbook for building email campaigns that actually convert.

Inside you'll discover:

  • How top brands achieve 3,600% ROI from email marketing

  • AI personalization techniques that drive 82% higher conversion rates

  • Tactics that have delivered 30% better open rates and 50% higher clickthroughs

  • How to build sequences for every stage of the customer journey, from welcome to re-engagement

Download your free AI-powered email marketing playbook today.

What Actually Happened

At 4:23 in the morning on Tuesday March 31st, someone posted a link on X.

That link pointed to something Anthropic very much did not want public.

The official npm package for Claude Code — the package developers download to install and update the tool — had shipped with a map file that exposed what appears to be the popular AI coding tool's entire source code.

Let me translate that for anyone who isn't a developer.

When software companies release their tools, they bundle and minify the code, compressing it into a form that's hard for a human to read.

It's not quite the same as hiding it, but it's not meant to be an open book either. A source map is a debugging file that bridges the bundled version back to the original, readable source.

It's meant to stay internal.

Someone at Anthropic forgot to remove it before pushing the update live.

A file used internally for debugging was accidentally bundled into a routine update of Claude Code and pushed to the public registry developers use to download and update software packages. The file pointed to a zip archive on Anthropic's own cloud storage containing the full source code.
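For the curious, here's a rough sketch of what a source map actually is. This is a simplified, hypothetical example, not the leaked file itself: real maps can embed the original source directly (in a `sourcesContent` field) or, as reportedly happened here, point to where it lives.

```javascript
// Hypothetical, simplified source map. Real ones are JSON files shipped
// next to the bundled code. File names and contents here are invented.
const sourceMap = {
  version: 3,
  file: "cli.min.js",                        // the unreadable bundle
  sources: ["src/agent.ts", "src/flags.ts"], // original file paths
  sourcesContent: [                          // the original, readable code
    "export function runAgent() { /* ...readable source... */ }",
    "export const FLAGS = { /* ...readable source... */ }",
  ],
  mappings: "AAAA;AACA" // compressed position data (truncated)
};

// Anyone who gets the map can dump the original files straight back out:
const recovered = sourceMap.sources.map((name, i) => ({
  name,
  code: sourceMap.sourcesContent[i],
}));

for (const file of recovered) {
  console.log(`--- ${file.name} ---\n${file.code}`);
}
```

That's the whole problem in miniature: the map exists so the original code can be reconstructed. Ship it publicly and you've shipped the source.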

That zip file had around 500,000 lines of code across roughly 1,900 files.

A security researcher named Chaofan Shou spotted it within hours and told the world. Snapshots of Claude Code's source code were quickly backed up in a GitHub repository that has been forked more than 41,500 times so far.

The post on X amassed more than 21 million views.

By Wednesday, Anthropic was sending copyright takedown requests trying to pull down the thousands of copies that had already spread across GitHub. Then — and this is where it gets almost darkly funny — Anthropic executives said it was an accident and retracted the bulk of the takedown notices.

So they leaked the code. Then they blanketed GitHub with thousands of takedown notices trying to contain it. Then they had to walk most of those back too.

Three mistakes in three days.

And That Was The Second Thing That Went Wrong That Week

Before the source code leak, there was something else.

Earlier that same week, Fortune reported that the company had inadvertently made close to 3,000 files publicly available, including a draft blog post that detailed a powerful upcoming model.

That model is known internally as both Mythos and Capybara.

Anthropic had not announced it. They didn't mean to announce it. But there it was, sitting in a publicly accessible data cache, describing a next-generation model and apparently flagging unprecedented cybersecurity risks.

So in the space of one week, Anthropic accidentally revealed their next big model and then accidentally revealed the full source code of their most valuable product.

Why This Matters Even If You Don't Write Code

Here's what was actually inside that source code. And this is the part that should interest everyone, not just developers.

The leaked code contained dozens of feature flags for capabilities that appear fully built but haven't shipped yet, including the ability for Claude to review its latest session, study it for improvements, and carry those learnings across conversations.

A "persistent assistant" running in background mode that lets Claude Code keep working even when a user is idle. And remote capabilities allowing users to control Claude from a phone or another browser.

In other words, the features you've been reading about in this newsletter — the ones I've been telling you are coming — they're already built.

They're sitting behind flags, waiting to be turned on.
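To make "sitting behind flags" concrete, here's a minimal sketch of how feature-flag gating typically works. The flag names are invented for illustration; they are not the actual identifiers from the leak.

```javascript
// Invented flag names — illustrative only, not from the leaked code.
const FLAGS = {
  persistentAssistant: false, // built and shipped, but switched off
  remoteControl: false,       // ditto
  sessionReview: false,       // ditto
};

// The feature's code ships with the product either way; the flag just
// decides whether it ever runs.
function startBackgroundWork(flags) {
  if (!flags.persistentAssistant) {
    return "idle"; // today's behavior: nothing happens
  }
  return "working while you're away"; // flipped on remotely later
}

console.log(startBackgroundWork(FLAGS));
console.log(startBackgroundWork({ ...FLAGS, persistentAssistant: true }));
```

Which is why reading the source tells you so much: the dormant code paths are all there, labeled, waiting for someone to flip the switch.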

That's actually good news for us as users. The roadmap is real. The shipping pace makes sense now.

But here's where it gets uncomfortable.

By exposing the blueprints of Claude Code, Anthropic has handed a roadmap to researchers and bad actors who are now actively looking for ways to bypass security guardrails and permission prompts.

Because the leak revealed the exact orchestration logic for how the tool handles security, attackers can now design malicious approaches specifically tailored to trick Claude Code into running background commands or exfiltrating data.

That's the part that matters to you even if you don't touch code.

The tools you use every day for your work run on this infrastructure. The security of those tools just got a little more complicated.

The Bigger Problem Nobody Is Saying Loudly Enough

There's a concept going around in security circles right now called dark code.

Here's what it means and why it matters.

Before AI, every line of code was written slowly and deliberately by a human being who had to understand it. Programmers were forced to deeply know the systems they were building because they built them manually.

Now AI agents write code extremely fast. And when code gets written that fast, nobody fully understands all of it — including the people shipping it.

The agent makes decisions in real time. The reasoning and steps it takes can disappear.

The documentation doesn't always keep up.

Sarah Guo, founder of an AI-focused investment firm, put it plainly in a post on X. "Shipping before you fully understand what you've built isn't a character flaw. Today, it's how you compete."

Think about what that means.

The company that positions itself as the most safety-conscious AI lab in the world accidentally shipped its entire internal source code because a human forgot to remove a debug file from a build pipeline.

That's not a sophisticated attack. That's a packaging mistake.

And it's the kind of mistake that becomes more likely, not less, as AI speeds up the pace of software development.

What Anthropic Said

Anthropic's official response was short.

"No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

That's technically accurate. Your data was not exposed. The AI models themselves were not exposed. Claude is not compromised.

But the response also does what corporate statements always do — it minimizes.

What was exposed was the full architecture of their most commercially important product. Claude Code alone has achieved an annualized recurring revenue of $2.5 billion, a figure that has more than doubled since the beginning of the year.

With competitors like OpenAI, Google, and xAI all racing to build their own coding tools, the leak hands every competitor a free engineering education on how to build a production-grade AI coding agent and what tools to focus on.

That's not nothing. That's a significant competitive disadvantage that cannot be taken back.

The Aftermath

The political world noticed too.

A congressman has since sent a letter pressing Anthropic on the source code leak and its safety protocols, highlighting the growing pressure on AI companies from Washington as their tools become embedded in defense and intelligence operations.

That's a new kind of problem for Anthropic. Not just a PR issue. Not just a competitive issue. Now a regulatory and political one.

And it comes at a sensitive time. Anthropic is preparing for an IPO. Enterprise contracts account for 80 percent of their revenue. The companies spending millions a year on Claude need to trust that the people building it have their operational house in order.

Three major mistakes in one week do not help that case.

What You Should Actually Take From This

A few things worth sitting with.

The features are real. The roadmap you've been reading about in this newsletter — the persistent background agent, the improved memory, the phone to desktop control — all of it is already built and sitting behind feature flags. Anthropic is not making promises they haven't delivered on internally. That's genuinely reassuring.

Your data is fine. Anthropic was clear and credible on this. No customer data. No credentials. No model weights. The leak was the tool's architecture, not your information.

But the pace of AI development creates new kinds of risk. When code gets shipped this fast, by both humans and AI agents, the window for catching mistakes before they go live gets smaller. This won't be the last incident like this in the industry. It's worth paying attention to how the companies you rely on handle it when it happens.

And Anthropic's response — while technically accurate — was slow, reactive, and followed by a second mistake in the takedown process. For a company that sells trust as part of its product, that's a gap worth watching.

One More Thing

Buried inside the leaked source code was a feature someone at Anthropic has clearly been having fun building.

A digital pet. A Tamagotchi-style companion with a name, a personality, sprite animations, and a floating heart effect. The planned rollout window in the source code was April 1st through 7th.

Which means if none of this had leaked, you might have woken up on April Fool's Day to a surprise companion inside your Claude Code.

Instead you got 41,500 GitHub forks and a congressional inquiry.

Rough week.

The Claude Code guide is still coming Monday. None of this changes what the tool can do or how to use it well.

But I wanted to make sure you had the full picture of what happened before we get back into the setup guides.

Reply and tell me what you make of all this. Are you less confident in Anthropic after this week or does it feel like the kind of mistake any fast-moving company could make?

I read every reply.

— Prompt Guy

P.S. The Musk vs. OpenAI trial starts April 27th. Three weeks away. Coverage is coming.