Aug 10 (edited) • AI Hacks / Tools
Prompting - Best Practices
These are practical, high-impact tips to make your prompting sharper, more efficient, and more future-proof. Each point calls out a common blind spot and gives you an actionable step you can apply immediately. They come from hands-on use across multiple models — not theory — so you can avoid wasted time, confusion, and frustration.
Save this list, revisit it, and watch your results improve over time.
  • Break big prompts into reusable modules: short, focused blocks you can swap or rearrange without rewriting everything (see the first sketch after this list).
  • Use flexible wording, avoid model-specific quirks, and test prompts on multiple models while keeping a fallback version.
  • Explore prompts that instruct the AI to improve or analyze other prompts, building a meta-level skill set.
  • Record the model version in your notes and maintain a versioned prompt library for each model.
  • In each model's docs, review at least the sections on prompt formatting, capabilities, and limitations to uncover hidden features.
  • Decide the exact output format you want (table, Markdown, JSON) before prompting to ensure clean, reusable results (see the JSON sketch after this list).
  • Refine in steps, asking the AI to improve its own output until it meets your target standard.
  • Split large tasks into separate threads or requests to keep each focused and avoid token overload.
  • Skim the docs to learn about function calling, system prompts, and structured outputs, even if you don't code (see the API sketch after this list).
  • Complete at least one learning path to gain tested best practices straight from OpenAI.
  • Search for prompts and solutions others have shared to shortcut your own problem-solving.
  • When asking for edits or improvements, also request an explanation in a table showing why, what, and how.
  • Turn specific answers into generalized templates or frameworks so they work in multiple contexts.
  • Run edge cases, tricky inputs, and unusual instructions to discover weaknesses early.
  • Add “Explain your reasoning step-by-step” to spot logic gaps and strengthen your results.
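To make the modular-prompt idea concrete, here's a minimal sketch in plain Python. Everything in it (the block names, the wording) is just my own illustration of the pattern, not something prescribed by any model:

```python
# Minimal sketch: assemble a prompt from small, swappable blocks.
# Block names and wording are illustrative only.

ROLE = "You are a senior marketing copywriter."
TASK = "Write a 3-sentence product description for the item below."
FORMAT = "Return plain text, no headings, max 60 words."

def build_prompt(*blocks: str) -> str:
    """Join whichever blocks you need; swap or drop one without rewriting the rest."""
    return "\n\n".join(blocks)

prompt = build_prompt(ROLE, TASK, FORMAT, "Item: reusable steel water bottle")
print(prompt)
```

The point is that each block is small enough to rewrite on its own, so changing the output format never touches the role or the task.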
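Same idea for locking down the output format: spell the shape out in the prompt, then validate the reply before reusing it. The schema and field names below are only an assumed example:

```python
import json

# Assumed example schema; adapt the fields to your own task.
prompt = """Summarize the customer feedback below.
Respond with JSON only, exactly this shape:
{"sentiment": "positive|neutral|negative", "key_points": ["..."], "follow_up_needed": true}

Feedback: "Shipping was fast, but the lid cracked after a week."
"""

# Placeholder standing in for the model's reply.
reply = '{"sentiment": "negative", "key_points": ["fast shipping", "lid cracked"], "follow_up_needed": true}'

data = json.loads(reply)       # fails loudly if the model ignored the requested format
assert "sentiment" in data     # cheap sanity check before reusing the result
print(data["key_points"])
```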
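And for system prompts and structured outputs specifically, this is roughly what it looks like with the OpenAI Python SDK. Treat it as a sketch, not a reference: the model name is a placeholder, and JSON mode is only available on some models, so check the current docs before copying.

```python
# Sketch only: needs `pip install openai` and an OPENAI_API_KEY in your environment.
# The model name and prompts are placeholders; check the current docs for options.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you actually work with
    messages=[
        {"role": "system", "content": "You answer in strict JSON with keys 'summary' and 'risks'."},
        {"role": "user", "content": "Summarize the pros and cons of weekly newsletters."},
    ],
    response_format={"type": "json_object"},  # JSON mode; not supported by every model
)

print(response.choices[0].message.content)
```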
Bonus Tip:
You can even use GitHub or another version control system to track changes to your prompts over time and keep a well-organized, versioned prompt library; a minimal sketch of one library entry follows below.
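Here's one way such an entry could look: one JSON file per prompt, with the model version recorded next to it, committed like any other file. The file layout and field names are just one possible convention, not a standard:

```python
import datetime
import json
import pathlib

# One possible convention: one JSON file per prompt, tracked in git like any other text file.
entry = {
    "name": "product-description",
    "version": 3,
    "model": "gpt-4o",  # record which model this version was written and tested for
    "last_tested": datetime.date.today().isoformat(),
    "prompt": "You are a senior copywriter. Write a 3-sentence description of: {item}",
    "notes": "v3 tightens the length limit; v2 rambled on long inputs.",
}

path = pathlib.Path("prompts") / "product-description.json"
path.parent.mkdir(exist_ok=True)
path.write_text(json.dumps(entry, indent=2))
# Then: git add prompts/ && git commit -m "prompt: product-description v3"
```

Because it's all plain text, git diff shows exactly what changed between prompt versions.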
Resources:
EDIT: And if you want to test your prompts across models, like 4o vs. 5:
Just go to https://gptblindvoting.vercel.app/ (green button, as framed in the pic below)
P.S.: The right button runs an anonymized test that hides which model produced which answer, so you can learn which model you'd actually choose most often.