In partnership with

Reading Time: 5 minutes

Hey Prompt Lover,

Yesterday I told you about the three types.

Today I'm showing you what they actually look like in practice, because reading the names is one thing; recognizing the behavior is another.

And when you see it described in detail, you're going to recognize yourself. Maybe uncomfortably so.

Let's start with the one most people think they are.

Get what you want from TV advertising

What you want from TV advertising: Full-screen, non-skippable ads on premium platforms.

What you get: "Your ad is on TV. Trust us."

Modern, performance-driven CTV gets your TV ads where you want them, with transparent placement, precision audience targeting, and measurable performance, just like other digital channels.

TV doesn't have to be a black box anymore.

The Centaur.

This person opens AI with a specific job in mind.

They already have a direction.

They already have a view.

They're not asking AI to figure out the problem — they're asking it for something specific that helps them do what they've already decided to do.

In the BCG study, a consultant named Jiwoo Paik needed to calculate a growth rate. Instead of asking AI to run the numbers, they asked: "How do I calculate market size growth from 2013 to 2017?"

AI gave them the formula.

They took the formula. Opened Excel.

Did the calculation themselves.

Then they came back and asked: "What's the Excel formula for this?" Got it. Used it.

That's it. That was the interaction.

They needed a method, they got the method, they did the work.
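For the curious, here's roughly what that calculation involves. The revenue figures below are made up (the case data isn't public); the formulas are just the standard ones for total growth and compound annual growth rate.

```python
# Hypothetical figures -- the study doesn't publish the actual case numbers.
revenue_2013 = 100.0
revenue_2017 = 172.0  # a 72% total increase, like the menswear example

# Total growth over the whole period
total_growth = (revenue_2017 - revenue_2013) / revenue_2013  # 0.72

# Compound annual growth rate (CAGR) over the 4 years
cagr = (revenue_2017 / revenue_2013) ** (1 / 4) - 1  # roughly 14.5% per year

print(f"Total growth: {total_growth:.0%}, CAGR: {cagr:.1%}")
```

In Excel, the same CAGR is `=(B2/B1)^(1/4)-1` with the start and end values in B1 and B2. That's the kind of answer the Centaur takes back to the spreadsheet.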

Another consultant needed to understand how clothing retailers had handled similar problems. So they asked: "What are examples of companies that turned around a men's clothing brand after poor performance?"

AI gave examples. The consultant used that knowledge to sharpen their own thinking. Then they did the analysis themselves.

The Centaur uses AI like a very fast, very knowledgeable colleague you can ask a quick question before getting back to work.

They're building their domain knowledge through AI. Getting sharper at their actual job.

What they're not doing is learning how to work with AI better. They keep it at arm's length. Deliberate. Selective. Controlled.

The Cyborg.

This person is in conversation with AI the entire time.

Not just asking questions. Not just getting help with one step.

Actually working through the problem together, back and forth, across the whole session.

Here's what that looks like in the study.

A consultant called Amara gets a recommendation from AI. Women's brand. AI has reasons. Sounds logical.

But Amara doesn't just accept it. She asks: "The menswear market grew 72% from 2013 to 2017. Does that change your recommendation?"

AI reconsiders. Shifts its view.

Another consultant called Hiro gets the same initial answer. Disagrees. Says: "I don't think I agree. The women's brand is already doing well. I'd focus on Kleding Kids instead."

AI adjusts. Writes a new recommendation.

Hiro reads it. Changes their mind again.

Goes back. "I've changed my mind. Kids is too small a share of the market. Go back to men's. Here's why."

AI rewrites again.

That back and forth — that's the whole thing. The Cyborg is never just accepting the output. They're always in it. Testing it. Correcting it. Steering it.

One person in the study described it perfectly. They said before, they'd ask AI a question, get an answer, and that was the starting point. After working this way, they realised they could keep pushing deeper, keep asking follow ups, keep challenging it. The AI got better. And so did they.

The Cyborg is learning how to work with AI. That's their skill development. They're becoming fluent in how to get AI to do better work.

The Self-Automator.

This is the one nobody wants to admit they are. But a full 27% of the study's participants, more than one in four highly trained, highly motivated professionals, fell into this category.

Here's exactly what happened.

One consultant, Carlos, opened the session. Copied the full problem statement. Pasted in all the interview transcripts. Pasted in all the financial data. Then asked: "What brand should the CEO focus on? What's the rationale? What actions should they take? Write a 500 word memo."

One prompt.

AI produced everything.

Carlos read it. It seemed reasonable. He submitted it.

Another person in the study put it more honestly than anyone else. She said: "I was feeling lazy. I was like, how should I write 500 words now? I don't want to write 500 words."

And this is the important thing.

The outputs from Self-Automators looked fine. Polished. Professional. Structured.

They just got the answer wrong more often. Because AI, without anyone pushing back or adding context or questioning the logic, went with its best guess and nobody caught where that guess was off.

And the person who submitted it? They didn't really know why the answer was what it was. They couldn't defend it. They couldn't explain the reasoning. They just knew what AI told them.

Now here's the uncomfortable bit.

The same person can be all three types on the same day.

You might be a Centaur when you're writing something you care about.

A Cyborg when you're working on a problem you find interesting.

And a Self-Automator when it's 4pm on a Friday and someone needs something in an hour.

The researchers were clear about this. These aren't fixed personalities. They're patterns you fall into depending on the situation.

The question is: which one are you defaulting to?

Because the default is where the habit lives. And the habit is where the skill development happens — or doesn't.

Tomorrow I'm going to tell you which mode to use when.

Because the answer isn't just "always be a Cyborg." It's more nuanced than that — and more useful.

Claude guides drop April 2nd.

Cowork full setup. Claude Code. The phone to laptop workflow. All of it.

See you in the final issue tomorrow.

— Prompt Guy

P.S. The diary entry in the OpenAI trial keeps getting mentioned. The Musk deep dive is coming back too. A lot still to cover.