Here's how I distilled twelve key strategy and technology classics into a concise, practice-ready playbook - and how you can replicate this approach for any field.
1 | What the Deep‑Research Prompt Does (and Why You Might Care)
This section outlines precisely what the "Masterprompt" achieves and its relevance to learners who want immediate practical outcomes. If you're primarily interested in how such prompts are constructed, you may prefer jumping directly to section 2.
When you paste the following prompt into an advanced LLM, you receive a phase‑by‑phase learning playbook that distills the key ideas, frameworks, and exercises from twelve classic books on strategy, entrepreneurship, and software architecture.
Why it matters
Speed: Condense months of reading into a manageable roadmap.
Structure: Clearly sequenced knowledge with built-in cognitive ease.
Adaptability: Easily modified for any specialized domain.
To experience this immediately, copy and experiment with the prompt below using your most capable available model (ideally a deep-research optimized one). If you're interested in the logic and process behind crafting these deep prompts, continue reading.
"
You are a world‑class expert mentor in strategic thinking, entrepreneurship, and software system design.
Your task is to produce a comprehensive, phase‑by‑phase learning guide that delivers the same insights, frameworks, and actionable exercises as if the learner had read and applied these twelve books:
1. "The Decision Book (fully revised)"
2. "Good Strategy Bad Strategy"
3. "The Lean Startup"
4. "Disciplined Entrepreneurship"
5. "The Mom Test"
6. "Venture Meets Mission"
7. "Grokking Algorithms"
8. "Clean Architecture"
9. "Architecture Patterns with Python"
10. "The DevOps Handbook"
11. "The Six Disciplines of Strategic Thinking"
12. "The Long Game"
Organize your output into four sequential phases — each with:
Core Concepts & Frameworks: concise summaries of the key models and principles from the relevant books.
Step‑by‑Step Application Exercises: concrete tasks or mini‑projects to internalize each framework (e.g., decision‑matrix creation, customer‑interview scripts, pipeline‑design diagrams).
Real‑World Examples: brief case studies or analogies drawn from startups, engineering teams, or product launches—especially relevant to AI‑driven training apps or data‑intensive systems.
Reflection Questions: prompts to check understanding and to adapt the ideas to the learner’s own venture or codebase.
Further Reading & Resources: links to key chapters, articles, or templates for immediate use.
Phase 1: Decision & Strategy Foundations
• The 50 decision‑models from The Decision Book
• Diagnosis of “good vs. bad” strategic kernels from Good Strategy Bad Strategy
Phase 2: The Entrepreneurial Playbook
• Build‑Measure‑Learn loops and MVP design from The Lean Startup
• The 24‑step launch process in Disciplined Entrepreneurship
• Customer‑validation scripts from The Mom Test
• Mission‑driven venture alignment from Venture Meets Mission
Phase 3: Technical & Pipeline Mastery
• Algorithmic thinking foundations (big‑O, greedy, recursion) from Grokking Algorithms
• High‑level architecture principles from Clean Architecture
• Hands‑on Python patterns for domain‑driven design in Architecture Patterns with Python
• DevOps practices and CI/CD pipeline blueprints from The DevOps Handbook
Phase 4: Embedding Long‑Term Strategic Discipline
• Habit‑forming strategic routines from The Six Disciplines of Strategic Thinking
• Patience, compounding results, and “play the long game” mindsets from The Long Game
Deliver a unified learning playbook that a motivated learner can follow—complete with timelines (e.g. 1–2 weeks per book), templates (decision‑matrices, interview scripts, pipeline diagrams), and self‑assessment checklists—so they emerge as a strategic thinker, start‑up founder, and system‑architect without having to read each book cover‑to‑cover.
"
2 | Under the Hood: 3‑Stage Prompt Stack
This section transparently illustrates how I created the above "Masterprompt." The intent is to empower you to replicate this workflow effectively for topics of your choice. If you're experienced with AI workflows, feel free to skim to section 3 for deeper insights.
Chaining small prompts yields clearer, more actionable output than asking the model one giant question. Below is the 3-stage method I follow whenever I want to learn a new field quickly; it consists of just three prompts. I've included the three prompts that led me to the above "Masterprompt" as examples (feel free to adapt them), and a code sketch tying the three stages together appears at the end of this section.
Stage 1 — Source
In this stage, you first define what you want to learn and then ask the model for books to help you do that. The topics I wanted to study were strategic thinking, entrepreneurship, and developer thinking. Below you can see the exact prompt I used in this stage.
"
I want to learn strategic thinking, entrepreneurship, and developer thinking (technical things about pipelines, how to structure them, etc.). Based on all you know about me, could you give me a list of books to help me expand my mind (with respect to the goals listed above)?
"
Stage 2 — Sequence
In this stage, you leverage the knowledge your model of choice has about your learning habits to map the books it has just come up with onto a learning curve that is optimal for you. Again, it's really simple!
Here is the prompt I used:
"
Based on all you know about me, what order should I read the 12 books you originally suggested in to facilitate the optimal learning curve?
"
Stage 3 — Synthesize
In this stage, you combine the outputs of the two previous prompts into a single deep‑research prompt that generates a self‑contained learning playbook. To achieve this, you simply ask the model to do exactly that, while reminding it to base its response on things relevant to you.
Here is the prompt I used:
"
Thanks! Now, based on the content of all these books, my learning goals, and your knowledge of me, could you generate an optimal deep-research prompt with the goal of generating an output that is equivalent to reading all the books you recommended and going through the phases of your roadmap?
"
3 | My Workflow & AI‑Interaction Principles
This section won’t be as structured as the previous ones. It’s intended for those interested in the thought processes and insights I've gathered through consistent, daily interaction with AI. My goal is simply to share what I've learned, efficiently and for free.
3.1 | Which AI should I use?
This is by far the most common question I receive, and unfortunately, the answer is complex. In the following, I will share my honest, current perspective, structured in two parts: first, a general overview, then a specific example based on the 3-stage prompt stack method.
.1 | General answer
As mentioned above, there is no easy answer, and it heavily depends on your preferences and use cases. It’s also constantly changing, so if you read this in the far(ish) future, take the following with a grain of salt.
Let me begin with a blunt but necessary clarification: if you aren't familiar with the distinction between reasoning-intensive prompts and simpler queries, the specific AI model you choose doesn't significantly matter. Currently, I recommend Claude for intuitive understanding and intent inference, but this recommendation could easily shift in a matter of weeks. To restate my point explicitly: if you're unclear about what tasks require sophisticated reasoning capabilities, you might as well choose your AI provider based on the UI.
Alright, now to the hard part of the question: what is the best model? This will change quickly and often, but right now (April 22, 2025) I am happy to be able to give a definitive answer: it's OpenAI's o3. There are three rough categories I (personally) like to rank models in: reasoning capability, tool use (like web search, but integrated), and natural-language understanding, including intent inference. Presently, o3 leads decisively in all three categories. No available alternative surpasses it in any meaningful way.
Most of you can probably tell I took the easy way out with that answer. This recommendation assumes cost isn't a factor, which unfortunately isn't realistic for most of us. o3 is notably expensive: the OpenAI Plus plan ($20/month) limits you to 50 o3 prompts per week, and the API pricing is also high ($10 per million input tokens, $2.50 per million cached input tokens, and $40 per million output tokens, at roughly one token per word; prices as of April 22, 2025). Unlimited access requires the Pro plan at $200 per month.
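To put those API rates in perspective (using the rough one-token-per-word approximation above): a 2,000-word prompt that returns a 1,000-word answer would cost about 2,000 × $10/1,000,000 + 1,000 × $40/1,000,000 ≈ $0.06 per call.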
Despite the high price, I don't consider o3 overpriced. Its capabilities are extraordinary—unthinkable just weeks ago. Right now, no model maker can even remotely compete with o3, so the 50 weekly prompts in the Plus plan bring unmatched value. While cheaper or free alternatives (such as upcoming models from Chinese providers like DeepSeek) will close this gap, for now, strategically maximizing your 50 weekly o3 prompts is the most pragmatic approach.
.2 | Use-case-specific answer
I will not be able to give a full answer (one for every use case). If you are interested in that, check out Nate's take on it (Nate Jones on Substack; it's free); he does a great job explaining it.
Instead, I'll focus on a specific scenario relevant to many: users subscribing to the OpenAI Plus plan only. This plan includes:
50 daily queries to o4-mini-high
50 weekly queries to o3
10 monthly queries to the deep-research model (either o4 or a version of o3 with more compute)
The key challenge, then, is optimizing these limited resources. Below, I'll illustrate my approach using the 3-Stage Prompt Stack as a concrete example.
Because this is a complicated topic, I'll start with high-level concepts and work my way into the details. To frame this clearly, consider the principle: the power of an AI model inversely correlates with the quantity of queries it allows. This is the case in the OpenAI Plus plan, but it applies generally to all subscriptions (except the Chinese ones, which are generally free; that is why I recommend Chinese models such as DeepSeek to everyone with a budget of $0).
Thus, you should always begin with less capable (cheaper, less restricted) models and escalate only when necessary. With experience, you'll become adept at intuitively recognizing which tasks warrant more powerful AI.
Another foundational principle guiding my AI interactions:
The amount of effort you put in is proportional to the quality of the output you receive.
Yet, no matter how meticulous your manual prompts, the capabilities of top-tier models like o3 or deep research far exceed what's practically achievable through human-written inputs alone. Thus, I strongly advise leveraging AI itself to craft prompts for these powerful models. This simple strategy can significantly amplify your outcomes.
Currently, my preferred model for prompt generation is Anthropic's Claude 3.7 Sonnet, though its daily free-query allowance fluctuates with demand. However, models tailor themselves to you based on your interactions (this is not specific to OpenAI; it simply works better the more you use any given model or model maker). So right now, with the OpenAI Plus subscription offering what it does (o3), I recommend using o4-mini-high for prompt generation.
An important caveat for European users: memory features currently aren't supported for o3 or o4-mini-high in Europe. Of course, that's annoying, but there’s a neat way around it. Simply avoid using phrases such as "based on all you know about me." Instead, first request a personalized summary from GPT-4o, then explicitly reference that summary in subsequent prompts.
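A hypothetical wording of that two-step workaround (adapt it to your own context) would be to first ask GPT-4o something like:
"
Please write a concise profile of me based on our conversations: how I learn, how I approach problems, and what my current goals are.
"
and then open the o3 or o4-mini-high conversation with "Based on the profile below ...", pasting the summary underneath.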
Returning to the 3-Stage Prompt Stack example, my objective was comprehensive learning in strategic thinking, entrepreneurship, and technical pipeline structuring. Knowing deep research's capabilities, I aimed to leverage it for this task. I knew that 4o knew a lot about how I tackle problems, the way I think, and the way I learn, and I wanted that reflected in the deep-research prompt. My 4o usage is unlimited, so I had nothing to lose by simply asking for the knowledge I was seeking. I knew it couldn't give me that knowledge directly, but with the search function it knew where to find it: books.
Putting all that together, I arrived at the following two prompts:
"
I want to learn strategic thinking, entrepreneurship, and developer thinking (technical things about pipelines, how to structure them, etc.). Based on all you know about me, could you give me a list of books to help me expand my mind (with respect to the goals listed above)?
"
"
Based on all you know about me, what order should I read the 12 books you originally suggested in to facilitate the optimal learning curve?
"
Recognizing the impracticality of thoroughly reading all recommended books, I employed deep research to distill and organize the essential insights. Choosing o4-mini-high to generate this deep-research prompt was straightforward: it combines strong reasoning with sufficient daily query limits. Thus, I completed the stack with:
"
Thanks! Now, based on the content of all these books, my learning goals, and your knowledge of me, could you generate an optimal deep-research prompt with the goal of generating an output that is equivalent to reading all the books you recommended and going through the phases of your roadmap?
"
A closing note on model usage: sophisticated reasoning-focused models (e.g., o4-mini-high) typically excel when provided a clear problem statement rather than overly prescriptive instructions. However, if your task demands specific logical steps, clearly outlining these in your prompt often improves results.
In contrast, standard language models consistently perform better with highly detailed context. Both scenarios underscore the importance of tailoring your prompt-writing strategy carefully to the capabilities of your chosen AI model. I'll elaborate further on this topic in future posts.
3.2 | Operating Principles – Core Rules Guiding Each AI Interaction
My interactions with AI consistently adhere to several fundamental principles, each intended to optimize clarity, reliability, and effectiveness:
Outcome first: Clearly specifying the desired outcome first ensures focused, precise outputs. For example: “Provide a brief (300 words max) comparison of methodologies A and B regarding efficiency, scalability, and risk.”
Single variable per prompt: Isolating one key variable per prompt simplifies debugging and helps pinpoint exactly where issues arise. This is why I prefer the staged approach - each prompt building logically on the last.
Frameworks over facts: Prioritizing understanding patterns or frameworks is crucial because underlying principles remain valuable long after specific facts become outdated. I often explicitly ask the AI, “What pattern or principle underlies this idea?”
Transparent reasoning: Requesting explicit reasoning steps from the AI surfaces assumptions and helps avoid hidden errors. A simple but effective prompt snippet you can include in your reasoning prompts: “Provide your reasoning explicitly step-by-step, then summarize your final conclusion separately.”
3.3 | Prompt-Engineering Patterns I Regularly Use
Several prompt structures consistently yield reliable outcomes:
Declarative to procedural framing: Clearly stating goals, constraints, and procedural steps upfront ensures systematic output. Example: “Create a 90-day learning plan for skill X, constrained by 7 hours/week and free resources. Outline the units, calendar mapping, and review checkpoints clearly.”
Role-specific perspective: Assigning the AI a specific role and perspective encourages targeted depth and vocabulary. Example: “You are a CTO advising a non-technical founder on scaling software systems.”
Controlled viewpoint: Restricting response length or format to essential elements prevents irrelevant elaboration. Example: “Respond in under 200 words,” or “List bullet points only, no additional context.”
Iterative self-critique: Prompting the AI to critique and revise its own output enhances quality dramatically. (Simple) example: “You are a world-class strategist. Meticulously analyze the strategy we derived with the goal of finding weaknesses or inconsistencies. List them in descending order of importance.”
Cross-validation with multiple models: Alternating between different models or systems for creation and critique significantly reduces blind spots. I often give a prompt to multiple strong models and then use a prompt snippet like “Combine the strengths of each of the approaches in the additional context to achieve an optimal output.” This is a very simplified version; the actual snippet has to be tailored to the task (a code sketch of this pattern follows after this list). If you guys are interested, I can make an entire post on this topic.
Prompt structure: Again, this varies drastically, but most of my best-performing prompts follow the rough structure of:
"
Clear goal description (1 to 2 sentences, 3 max)
Outline of the prompt (so that the model knows which part of the prompt to send to which reasoning head). This could look something like:
For this I will provide you with:
1. Context
2. Precise & detailed goal description
3. Instructions (possible solution pathways)
3.1 High level instructions
3.2 Detailed instructions
3.3 Negative instructions (Problems you often ran into with similar prompts - here you tell the model what NOT to do)
4. Additional context
I then proceed with the prompt as structured in the outline. Be careful to include only one type of information in each paragraph/section. This way, the model can send the correct information to the correct heads directly and doesn’t have to filter noise. If, for example, you include context in the goal description, the model has to filter that out (as it can’t “think” of it in the same way it does about the goals). If you’re lucky, it recognizes that it’s important context and classifies it as such, but there is a significant chance of the context just getting lost. To clarify: every part of your prompt should have a purpose, and all of the information in each part should be relevant to only that purpose.
"
3.4 | Verification & Anti-Hallucination Strategies
Ensuring AI accuracy and avoiding common pitfalls demands specific safeguards:
Citation validation: Always request specific citations, randomly verifying several references manually to detect and prevent fabrication.
Demanding counter-examples: Requiring the AI to explicitly state situations where the provided advice might fail helps avoid over-generalizations.
Freshness checks: Directly prompting the AI to cross-check recent developments ensures advice remains current: “Review recent updates within the last year; flag anything altering your original recommendation.”
Executable verification: Any AI-generated code or mathematical reasoning is treated as pseudocode and verified through execution or rigorous external checks before acceptance.
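As a concrete illustration of that last point, suppose a model hands you a binary-search function (a hypothetical example, not tied to any of the books above). Before trusting it, I run it against a few hand-computed cases:
"
def binary_search(items, target):
    # AI-generated function, treated as pseudocode until verified
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

# Executable verification: hand-computed cases must pass before the code is accepted
assert binary_search([1, 3, 5, 7, 9], 7) == 3
assert binary_search([1, 3, 5, 7, 9], 1) == 0
assert binary_search([1, 3, 5, 7, 9], 2) == -1
assert binary_search([], 42) == -1
print("All verification checks passed.")
"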