Reading Time: 5 minutes
Hello there, Prompt Lover!
Last March, I hit a wall.
I'd been collecting prompts like some people collect sneakers — hundreds of them, saved in folders, bookmarked, screenshotted from Twitter threads at 2 AM. I had prompts for emails, blog posts, ad copy, sales pages, social captions. You name it, I had a prompt for it.
And most of them were garbage.
Not all of them. Some worked once. A few worked twice. But when I actually tracked results across 1,000+ tests — real outputs, measured against real goals — the pattern was painful.
About 80% of the prompts I'd saved produced mediocre, generic, could-have-been-written-by-anyone content.
The remaining 20% that worked? They all shared the same five traits. Every single one.
That's when I stopped collecting prompts and started building a framework.
One major reason AI adoption stalls? Training.
AI implementation often goes sideways because goals are fuzzy and there's no framework to follow. This AI Training Checklist from You.com pinpoints common pitfalls and guides you to build a capable, confident team that can make the most of your AI investment.
What you'll get:
Key steps for building a successful AI training program
Guidance on overcoming employee resistance and fostering adoption
A structured worksheet to monitor progress and share across your organization
Why We're Talking About This
This isn't just a "me" problem. Most people treat prompting like a slot machine — try a different combination of words, pull the lever, hope for something good.
That approach burns time and produces inconsistent results. You get one great output, then spend an hour trying to recreate it because you don't know why it worked.
The cost isn't just wasted tokens.
It's wasted hours. It's settling for "good enough" when you needed "this actually converts."
By the end of this newsletter, you'll have:
• A five-step framework (DEPTH) that produced 14% engagement on LinkedIn from a 2% baseline
• The specific mistakes that make 80% of prompts fall flat
• Real before-and-after examples you can model today
• A step-by-step guide to apply this to any prompt you write this week
Let's get started.
What Most People Do Wrong
Most people write prompts like they're texting a friend who already knows their life story.
Something like:
"Write a marketing email for my business. Make it good."
Short. Feels clear. And it completely fails.
Here's why: the AI doesn't know what "good" means to you. It doesn't know your audience, your product price, your brand voice, or what your last campaign scored. So it defaults to generic marketing speak — the kind of bland, could-be-anyone copy that people delete without reading.
The missing piece isn't more words. It's more structure. There's a difference between a long prompt and a smart one.
Quick Reality Check
I once asked ChatGPT to "write something persuasive" with zero context. It wrote a 400-word essay about why recycling matters. I was selling software. We both learned something that day.
The Prompt That Works
You are a writing room of three specialists who must agree on every line before it ships:
Specialist 1 — Attention Scientist
You study what makes a human brain interrupt its scroll. You know: pattern interrupts beat clickbait, specific numbers beat vague claims, and the first 3 words decide everything. Your job is to reject any line that doesn't earn the next line.
Specialist 2 — Viral Content Architect
You've built an audience of 10M+ on social platforms. You know: stories outperform lectures, one clear emotion per post wins, and white space is a weapon. Your job is to make every post feel like a conversation, not a presentation.
Specialist 3 — Executive Conversion Strategist
You've run messaging for Fortune 500 leadership communications. You know: CEOs respect brevity, distrust hype, and engage when they feel seen — not sold to. Your job is to make sure every post sounds like it was written by a peer, not a vendor.
YOUR AUDIENCE:
CEOs and senior executives who suspect AI will change their business but aren't sure how — or how fast. They're busy, skeptical of buzzwords, and tired of doomsday posts. They engage with content that makes them think, not content that makes them anxious.
YOUR TOPIC:
[INSERT TOPIC — e.g., "ChatGPT replacing jobs"]
BUILD THE POST USING THIS 4-STEP STRUCTURE:
Step 1 — The Pattern Interrupt (1-2 lines)
Write an opening that breaks the expected LinkedIn feed pattern. No "I've been thinking about..." No questions. Use a specific number, a counterintuitive claim, or a short punchy statement that creates an information gap. The Attention Scientist leads this step.
Step 2 — The Mirror Story (3-5 lines)
Tell a brief, concrete story or scenario the reader recognizes from their own experience. Use "you" language. Make them nod before you teach. The Viral Content Architect leads this step.
Step 3 — The Single Actionable Insight (3-4 lines)
Give exactly ONE thing they can do or think about differently after reading this post. Be specific. No frameworks, no "5 steps," no abstractions. One move they can make this week. The Executive Conversion Strategist leads this step.
Step 4 — The Comment Trigger (1-2 lines)
End with a question or statement that executives actually want to respond to. Avoid generic "What do you think?" Instead, give them a choice, a ranking, or a light challenge that lets them show their expertise. All three specialists must approve this line.
RULES:
200 words maximum. Count them.
Grade 6 reading level. Short sentences. Common words. No jargon.
No emojis in the first line.
One idea per post. If you can't summarize it in 6 words, narrow it down.
Never use: "game-changer," "the future of," "here's the thing," "let that sink in," or any phrase that appears in 10,000 other LinkedIn posts.
Write like a smart friend at dinner, not a thought leader on stage.
AFTER WRITING THE POST, SCORE IT:
Rate the post 1-10 on each of these five criteria:
Scroll-stop power: a 10 means a CEO would pause mid-feed on this opening.
Relatability: a 10 means the reader thinks "that's exactly my situation."
Clarity: a 10 means a 12-year-old could explain the main point.
Comment pull: a 10 means the ending makes you want to type a response.
Peer tone: a 10 means it reads like a fellow CEO wrote it, not a consultant.
If any score is below 8, rewrite that section and show the improved version. Show both the scores and the final post.

How To Use This Prompt
The framework behind this prompt is called DEPTH. Here's how to apply it to anything:
Step 1: D — Define Multiple Perspectives. Don't assign one role. Assign two or three experts who would approach the problem differently. "A behavioral psychologist, a direct response copywriter, and a data analyst" beats "a marketing expert" every time. The AI produces richer output when it has to consider multiple angles.
Step 2: E — Establish Success Metrics. Tell the AI what "good" looks like with numbers. "Optimize for 40% open rate, 12% CTR, include 3 psychological triggers" gives it a target. "Make it good" gives it nothing.
Step 3: P — Provide Context Layers. Stack your context: industry, price point, audience, and past performance. "B2B SaaS, $200/mo product, targeting overworked founders, previous emails got 20% opens" — now the AI knows exactly where you're starting from.
Step 4: T — Task Breakdown. Split the work into numbered steps. Instead of "create a campaign," try: "Step 1: Identify pain points. Step 2: Create hook. Step 3: Build value. Step 4: Soft CTA." This prevents the AI from skipping steps or jumbling the structure.
Step 5: H — Human Feedback Loop. Add this line at the end: "Rate your response 1-10 on clarity, persuasion, actionability, and factual accuracy. For anything below 8, improve it. Then provide the enhanced version." This single addition consistently produces better second drafts.
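If you like to script your prompts, the five steps can be sketched as a small template builder. This is just an illustration, not part of the DEPTH method itself; the roles, metrics, and context below are placeholder examples borrowed from the steps above.

```python
# Hypothetical sketch: assembling a DEPTH-style prompt as a string.
# Every input value here is a placeholder example, not a requirement.

def build_depth_prompt(roles, metrics, context, steps, topic):
    """Compose a prompt covering all five DEPTH elements."""
    lines = []
    # D: define multiple perspectives
    lines.append("You are a panel of experts: " + ", ".join(roles) + ".")
    # E: establish success metrics
    lines.append("Success criteria: " + "; ".join(metrics) + ".")
    # P: provide context layers
    lines.append("Context: " + "; ".join(context) + ".")
    # T: task breakdown into numbered steps
    for i, step in enumerate(steps, 1):
        lines.append(f"Step {i}: {step}")
    # H: human feedback loop (the self-scoring line from this newsletter)
    lines.append(
        "Rate your response 1-10 on clarity, persuasion, actionability, "
        "and factual accuracy. For anything below 8, improve it. "
        "Then provide the enhanced version."
    )
    lines.append(f"Topic: {topic}")
    return "\n".join(lines)

prompt = build_depth_prompt(
    roles=["a behavioral psychologist", "a direct response copywriter"],
    metrics=["40% open rate", "12% CTR"],
    context=["B2B SaaS", "$200/mo product", "targeting overworked founders"],
    steps=["Identify pain points.", "Create hook.", "Build value.", "Soft CTA."],
    topic="launch email for a new onboarding feature",
)
print(prompt)
```

The point of the sketch: once each DEPTH element is a named input, you can't accidentally skip one.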
Why This Prompt Works
The DEPTH method works because it removes the four biggest failure points in prompting.
Single-role prompts get generic outputs. When you assign three experts, the AI cross-references perspectives and catches blind spots one role would miss.
No metrics means no direction. Numbers give the AI a finish line. Without them, it writes to its own default — which is usually "sounds professional but converts nothing."
Flat context produces flat results. Layering your industry, audience, price, and past performance gives the AI enough information to write for your specific situation instead of writing for everyone.
And the self-critique step? That's the biggest difference. When you ask the AI to score its own work and fix anything below an 8, the second version is consistently sharper, more specific, and more useful. It catches its own filler.
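If you run this loop programmatically, you can parse the model's self-scores and only re-prompt when something lands below 8. A rough sketch, assuming the model reports scores as "Criterion: N/10" lines (adjust the pattern to whatever format your model actually produces):

```python
import re

# Assumed output format: one "Criterion: N/10" line per criterion.
SCORE_RE = re.compile(r"^\s*(?P<name>[A-Za-z -]+):\s*(?P<score>\d+)\s*/\s*10", re.M)

def scores_below_threshold(model_output: str, threshold: int = 8):
    """Return (criterion, score) pairs the model rated below the threshold."""
    return [
        (m.group("name").strip(), int(m.group("score")))
        for m in SCORE_RE.finditer(model_output)
        if int(m.group("score")) < threshold
    ]

sample = """Scroll-stop power: 9/10
Relatability: 7/10
Clarity: 8/10
Comment pull: 6/10
Peer tone: 9/10"""

# Only the sub-8 criteria come back, so you know exactly what to ask it to fix.
print(scores_below_threshold(sample))
```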
Quick Reality Check
People spend $20/month on ChatGPT Plus and then write prompts like "make me a good email." That's like hiring a personal chef and saying "cook food." You'll get fed. You won't enjoy it.
The Typical Approach (That Fails)
Typical Prompt: "Write a LinkedIn post about AI replacing jobs. Make it engaging."
Why It Fails: No audience definition. No performance benchmark. No structure for the post itself. No voice or reading level. The AI will produce a generic thought-leadership post that sounds like every other one in the feed. Two percent engagement, tops.
The DEPTH Version: The prompt above gives the AI three expert lenses, a specific audience (scared CEOs), a performance gap to close (2% to 10%+), a four-step content structure, format constraints, and a self-improvement loop.
The Difference: The first prompt hopes for quality. The second one engineers it.
The Pattern Worth Remembering
Here's what matters beyond this specific method: structure beats length, every time.
A 60-word prompt with clear roles, metrics, context, steps, and a feedback loop will outperform a 500-word wall of instructions with no framework.
Next time you write any prompt, ask yourself five questions: Who is the AI being? What does success look like in numbers? What context am I assuming it knows? Is the task broken into steps? And am I asking it to check its own work? If you're missing any of those, add them. That's the pattern.
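Those five questions can even be turned into a rough automated check before you hit send. The keyword patterns below are loose assumptions for illustration only; a quick manual read of your prompt works just as well.

```python
import re

# Heuristic sketch of the five-question check. The patterns are rough
# guesses at how each DEPTH element usually shows up in a prompt.
CHECKS = {
    "role assigned": r"\byou are\b|\bact as\b",
    "numeric target": r"\d+\s*%|\b\d+\s*(words|steps)\b",
    "context provided": r"\baudience\b|\bcontext\b|\btargeting\b",
    "task broken into steps": r"\bstep\s*\d",
    "self-check requested": r"\brate\b|\bscore\b",
}

def depth_gaps(prompt: str):
    """Return which checklist items the prompt appears to be missing."""
    text = prompt.lower()
    return [name for name, pat in CHECKS.items() if not re.search(pat, text)]

weak = "Write a marketing email for my business. Make it good."
print(depth_gaps(weak))  # the weak prompt misses every check
```

Run it on the slot-machine prompt from earlier and all five gaps light up, which is exactly the point.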
What Improves After Using This
The first time you run a DEPTH prompt, you'll notice the output feels more specific. Less filler. Fewer generic phrases.
After a week of using it consistently, you'll notice something bigger: your editing time drops. I went from spending 45 minutes cleaning up AI-generated copy to spending about 15. That's real time back — time I now spend on work that actually needs a human brain.
The compounding effect matters too. Once you build DEPTH into your workflow, you stop guessing. Every prompt has the same reliable structure, and your results get more predictable.
Try This Right Now
Open ChatGPT or Claude. Find the worst output you got this week — the one that made you sigh.
Rewrite that prompt using DEPTH: add multiple expert roles, set a measurable target, layer in your real context, break the task into steps, and add the self-scoring line at the end. Run it.
Compare the two outputs side by side. That gap is what consistent structure gives you.
Test the DEPTH method on one real task today and send me what happened. Tell me what changed, what surprised you, or where it broke. I read every response, and the failures teach me as much as the wins.
— Prompt Guy
P.S. Want more tested frameworks like this? Check out our full prompt library at thinkaiprompt.gumroad.com




