Reading Time: 5 minutes

Hey Prompt Lover,

We're on the last day of this series and I want to make this genuinely useful.

Not just interesting to read.

Actually useful in your next AI session.

So let me tell you what the research actually says to do — and then I'll tell you what I think about it from using Claude every single day.

First, let's settle something.

A lot of people read Day 1 and Day 2 and immediately thought: okay, I need to be a Cyborg all the time.

That's not what the research says.

The Centaurs in this study, the people who used AI selectively and did most of the thinking themselves, actually had the highest accuracy on the task.

They got the right answer more consistently than anyone else.

They also deepened their domain expertise the most.

The Cyborgs got better at working with AI.

They became more fluent at prompting, more comfortable pushing back, better at getting AI to do harder things. They maintained their expertise while building new AI-related skills.

The Self-Automators got faster. And not much else.

So the question isn't which type is best. The question is: what do you need right now?

Here's the framework I'd actually use.

Replace your first 4 hires with AI. Free workshop on April 8th.

Most early-stage founders can't afford their first four hires. Sales, marketing, dev, and support alone can run hundreds of thousands in salaries.

On April 8th, AI thought leader Heather Murray shows pre-seed and seed founders how to build all four functions using AI tools. Live, with demos, for free.

Register today and get a free AI tech stack worth $5K+, including Claude, AWS credits, Make, and 90% off HubSpot.

If you're in a domain you want to get sharper in — be a Centaur.

Use AI for the parts that would slow you down, not the parts that would teach you something. Ask it for formulas. Ask it for examples. Ask it for context you don't have. Then do the work yourself.

One consultant in the study learned more about retail strategy in one session using AI this way than they could have from hours of reading. But they did the analysis themselves. They made the recommendation themselves. They argued for it themselves.

That's the skill building.

If you hand all of that to AI, you finish faster but you don't get smarter. And the next time you face the same kind of problem, you're starting from the same place you started today.

If you're working on something where quality matters and you need AI to go deeper — be a Cyborg.

Don't accept the first answer. Push back on it. Add information it doesn't have. Ask it to explain its reasoning. Point out where it's wrong.

The people in the study who did this produced better outputs than the people who just took whatever came back first. And they got better at AI as they went.

The specific behaviours that made Cyborgs effective were simple:

Ask AI to check its own work after it gives you an answer.

When it gives you a recommendation, ask why. Then ask what would change that.

When you disagree, say so out loud. Tell it why. Ask it to reconsider.

Add new information after the first output and ask if that changes anything.

These aren't complicated prompting techniques. They're just what good collaboration looks like.

If you have a task that's genuinely routine, low stakes, and just needs to get done — it's okay to automate it.

This is the part the researchers were careful about. Self-Automation isn't always wrong. For tasks that are well within what AI can handle reliably, for things that don't require your judgment, for output that you'd review carefully anyway — handing it off is fine.

The problem is when you start doing this for work that does require your judgment. When the task is outside what AI handles well, and you hand it over anyway and do a quick sanity check at the end.

That's where the wrong answers come from.

That's also where the skill erosion happens without you noticing.

Here's the thing I'd add from using Claude every day.

The gap between a Centaur and a Cyborg is mostly about how much you're willing to stay in the conversation.

Centaurs are comfortable. They know their work. They know what they need. They use AI for specific things and keep the rest to themselves.

Cyborgs are willing to be uncomfortable. They let the conversation go in directions they didn't plan. They're willing to be wrong in front of the AI and correct themselves. They're willing to disagree and get pushed back on and then change their mind.

That discomfort is where the learning lives.

And here's why I think Claude specifically rewards this.

Claude doesn't just give you an answer and wait. When you push back with reasoning, it actually engages with the reasoning. When you point out a contradiction, it doesn't just agree with you — it thinks about whether you're right.

That back and forth is where the real work happens.

The people in the study who got the most out of AI were the ones who treated it like a thinking partner who needed to be challenged, not a tool that needed to be instructed.

That shift in how you think about it changes everything.

One thing I want to leave you with from this research that nobody talks about enough.

The researchers found that the people who handed everything to AI and did a quick sanity check at the end — they thought they were being productive. They genuinely felt good about it. Fast output. Clean memo. Done.

They didn't know they'd gotten it wrong until someone checked.

And they had no way to know, because they hadn't done enough of the thinking themselves to catch it.

That's the real danger of Self-Automation. It doesn't feel like a problem until it is one.

The Cyborg who pushed back four times in one session might have felt like they were going slow. They actually had a better answer and a deeper understanding of why.

Speed isn't the metric. Quality of thinking is the metric.

That's the series.

Three days. One research paper. Three types. One question that matters.

Which one are you being — and is that the right choice for the work in front of you?

Now here's what's coming next.

The Claude guides start April 2nd.

First one is Cowork. Full setup from scratch. The three files you need, the folder structure, the global instructions, the AskUserQuestion feature that changes everything about how you work with it. Real walkthrough, not surface level.

After that is Claude Code. Then the phone-to-laptop setup — how to message your desktop from anywhere and come back to finished work.

These are proper guides. Everything I've learned from using this stuff daily.

If there's something specific you want covered in the Cowork guide, reply now and tell me. The most requested things go in first.

And if you have someone in your world who's using AI for work and wondering why their results are inconsistent — send them this series. Days 1 through 3.

It might change how they work.

See you April 2nd.

— Prompt Guy

P.S. The Musk vs. OpenAI trial starts April 27th. That coverage is coming too. There's a lot happening right now and none of it is boring.