The Prompt Engineering Advice Everyone Repeats, But Almost Nobody Understands
"Just use chain-of-thought."
Fair advice.
But incomplete.
Most people tell the model:
“Think step by step.”
Then wonder why the output still feels shallow, generic, or confidently wrong.
Because reasoning without structure often becomes performance.
Not thinking.
The Real Difference
The highest-performing prompts I tested didn’t ask for more intelligence.
They reduced ambiguity.
That’s the game.
The model already knows a lot.
Your job is to guide its attention.
Compare these two:
Weak
“Think step by step.”
Strong
Before answering:
<observation>
What do I know for certain?
</observation>

<hypothesis>
What is my best current explanation?
</hypothesis>

<test>
What would prove this wrong?
</test>

<conclusion>
What answer survives scrutiny?
</conclusion>

One creates noise.
The other creates process.
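If you reuse that scaffold often, it's worth generating rather than retyping. Here's a minimal Python sketch; the helper name and structure are my own, and only the four tags and their questions come from the prompt above:

```python
# The four reasoning stages from the prompt above, in order.
REASONING_STAGES = [
    ("observation", "What do I know for certain?"),
    ("hypothesis", "What is my best current explanation?"),
    ("test", "What would prove this wrong?"),
    ("conclusion", "What answer survives scrutiny?"),
]

def structured_prompt(question: str) -> str:
    """Wrap a question in the four-stage scaffold before sending it to a model."""
    scaffold = "\n".join(
        f"<{tag}>\n{instruction}\n</{tag}>"
        for tag, instruction in REASONING_STAGES
    )
    return f"{question}\n\nBefore answering:\n{scaffold}"
```

The point of the function isn't the string concatenation; it's that the stages live in one place, so every prompt in your system gets the same process, not an ad-hoc "think step by step."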
The Mistake Most Prompt Engineers Make
They optimise prompts like writers.
The best prompt engineers optimise prompts like system designers.
That changes everything.
Instead of:
“How do I make this sound smarter?”
They ask:
“Where can the model fail?”
That’s why anti-goals work so well.
Not just:
“Be an expert strategist.”
But:
“Do NOT give generic business advice.”
“Do NOT rewrite the user’s tone.”
“Do NOT optimise for politeness over accuracy.”
Constraints sharpen intelligence.
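One way to make anti-goals systematic rather than ad hoc is to attach them to the role prompt programmatically. A small sketch, assuming nothing beyond the article's own examples (the helper name is hypothetical):

```python
def with_anti_goals(role: str, anti_goals: list[str]) -> str:
    """Combine a role instruction with explicit 'Do NOT' failure modes to avoid."""
    rules = "\n".join(f"Do NOT {goal}" for goal in anti_goals)
    return f"{role}\n\n{rules}"

prompt = with_anti_goals(
    "Be an expert strategist.",
    [
        "give generic business advice.",
        "rewrite the user's tone.",
        "optimise for politeness over accuracy.",
    ],
)
```

Keeping anti-goals as data means you can grow the list every time you catch a new failure mode, without rewriting the prompt itself.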
The Most Overhyped Thing in AI Right Now
Mega-prompts.
The internet loves giant “ultimate prompts” with 4,000 tokens of instructions.
But after months of testing across GPT-4, Claude, and Gemini:
Smaller chained prompts won almost every time.
Why?
Because attention is finite.
A model handling:
tone,
format,
strategy,
reasoning,
examples,
context,
style,
constraints,
and output rules...
all in one shot?
That’s cognitive overload, even for AI.
The better approach:
Step 1: extract.
Step 2: analyse.
Step 3: refine.
Step 4: generate.
Pipelines outperform monoliths.
Almost always.
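The four steps above can be sketched as a chain where each call does exactly one job. This is a minimal illustration, not a real client: `call_model` is a stub standing in for whatever LLM API you use, and the step prompts are my own paraphrases of extract, analyse, refine, generate:

```python
def call_model(prompt: str) -> str:
    # Stub: replace with a real LLM client call in practice.
    return f"[model output for: {prompt[:40]}]"

def run_pipeline(source_text: str) -> str:
    """Chain small single-purpose prompts instead of one mega-prompt."""
    steps = [
        "Extract the key facts from this text:\n{x}",
        "Analyse these facts for patterns and gaps:\n{x}",
        "Refine the analysis into three concrete insights:\n{x}",
        "Generate a final answer from these insights:\n{x}",
    ]
    result = source_text
    for template in steps:
        # Each step sees only the previous step's output,
        # so the model's attention is never split across jobs.
        result = call_model(template.format(x=result))
    return result
```

Because each prompt carries one instruction, you can test, swap, or debug a single stage without touching the rest, which is exactly what a monolith prevents.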
A Story I Can’t Stop Thinking About
A friend sent me a “master prompt” once.
5 pages long.
Colour-coded.
Looked impressive.
He said:
“This thing does everything.”
I asked him one question:
“If it does everything... how does the model know what matters most?”
Silence.
That’s when it clicked for both of us:
Most bad prompting is instruction hoarding.
People keep adding layers because they don’t trust precision.
But prompting isn’t about saying more.
It’s about reducing uncertainty with surgical clarity.
And honestly?
That might also be true far beyond AI.