🌀AI Quirks — Why AI Sometimes Ignores Your First Instruction
✨ The AI Quirk:
You give AI a clear instruction at the start of a prompt… but the response seems to ignore it completely.
Even stranger, if you repeat the instruction later in the prompt, suddenly the AI follows it perfectly.
✨ What’s Going On:
  • Large language models weigh instructions based on proximity and clarity within the prompt.
  • Instructions buried early in a long message can lose influence once the model begins predicting the response.
  • The model often prioritizes the most recent instruction signals it sees.
  • If a prompt contains mixed signals (examples, context, and instructions together), the model may treat the first instruction as background instead of a rule.
Example:
You start with:
1) Write this in bullet points.
2) Then provide a long paragraph of context.
The model may treat the context as the main task and default to paragraphs.
But if you end the prompt with:
“Use bullet points for the final answer”,
the output suddenly follows the rule.
✨ What To Do If You See It:
  • Place critical instructions at the end of the prompt.
  • Separate instructions from context using spacing or labels.
  • Repeat important constraints when precision matters.
Try this prompt:
“Using the context above, produce the final answer in bullet points only.”
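The tips above can be sketched as a tiny prompt-builder. This is a minimal illustration, not any real library's API; the `build_prompt` helper and its `CONTEXT:`/`INSTRUCTION:` labels are hypothetical names chosen for the sketch:

```python
def build_prompt(context: str, instruction: str, repeat: bool = False) -> str:
    """Assemble a prompt that keeps the critical instruction at the end.

    The context goes first under its own label; the instruction is appended
    last (and optionally also stated up front) so it is the most recent
    signal the model sees before generating.
    """
    parts = []
    if repeat:
        # Optionally state the constraint up front as well, for precision.
        parts.append(f"INSTRUCTION: {instruction}")
    parts.append(f"CONTEXT:\n{context}")
    # The critical instruction always comes last.
    parts.append(f"INSTRUCTION: {instruction}")
    return "\n\n".join(parts)


prompt = build_prompt(
    context="(long background paragraph goes here)",
    instruction="Use bullet points for the final answer.",
    repeat=True,
)
print(prompt)
```

The labels double as the "spacing or labels" separation mentioned above, and `repeat=True` covers the case where precision matters enough to state the constraint twice.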
✨ Why This Happens:
AI isn’t reading instructions like a human would. It’s predicting the next most likely text — and it tends to pay the most attention to the instructions it sees last.
✨ AI Bits & Pieces — helping people and businesses adopt AI with confidence.
Michael Wacht
AI Bits and Pieces
skool.com/ai-bits-and-pieces