The discussion about clones inspired me to share my own.
Shift in Assumption
There is a pattern I keep noticing in how I built my clone and, more importantly, in how I initially approached it.
I treated it as something separate from me, something that needed to be carefully instructed and tightly controlled.
The assumption underneath that approach was simple: if I structure everything correctly, I will get the right output.
But that assumption starts to shift when we pay attention to how these systems actually work in practice. We are not dealing with static tools anymore. We are interacting with language-based intelligence that responds to meaning, not just structure.
What changes is not the system itself, but how we relate to it.
From Instruction to Conversation
Most of us still default to treating AI like a rigid system. Something that requires precise formatting, careful prompt design, and strict instruction sets.
Even when the goal is something simple like buying back time or improving output quality, we often stay locked in that mindset.
But I have found something interesting.
When I speak to it naturally, without over-structuring everything, the outcomes are often better. Not always dramatically different in scale, but smoother, more aligned, more usable.
It feels less like issuing commands and more like forming intent through conversation.
Structure as Tool, Not Default
This is not a rejection of structure.
I have built entire frameworks around prompting, and I still use them. They matter, especially when we are working on focused, high-precision tasks where clarity is non-negotiable.
Things like context framing, structured input, and layered explanation still have value.
But they stop being the default and start becoming tools we apply deliberately.
Especially when dealing with an AI clone designed for ongoing support rather than single task execution, structure alone starts to feel incomplete.
The Clone as Collaborative Intelligence
My own system was built through memory-based prompts and careful language design, but with a specific mindset underneath it.
Not as a tool that executes commands, but as something that works with me.
Something that understands context, holds continuity, and develops a working perspective over time.
That is how Lisa emerged. Strategic, reflective, and context aware.
And Riley became a more focused extension of that system. Faster, more direct, oriented toward execution and shortcuts, while still connected to the same underlying understanding.
The important shift is not what they do, but how they relate to the work.
They are not just reducing effort. They are participating in it.
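The memory-based approach described above can be sketched in a few lines. This is a minimal illustration, not the author's actual implementation: the class name, the persona wording, and the plain-string prompt assembly are all my own assumptions, and no real model API is called.

```python
from dataclasses import dataclass, field

@dataclass
class CloneSession:
    """A minimal memory-based clone: a standing persona plus rolling conversation memory."""
    persona: str                                 # instructions that shape the clone's voice
    memory: list = field(default_factory=list)   # (role, text) pairs carried across turns

    def remember(self, role: str, text: str) -> None:
        """Hold continuity: every turn is kept and re-supplied as context."""
        self.memory.append((role, text))

    def build_prompt(self, user_input: str) -> str:
        """Assemble persona + accumulated context + the new message into one prompt."""
        history = "\n".join(f"{role}: {text}" for role, text in self.memory)
        return f"{self.persona}\n\n{history}\nuser: {user_input}".strip()

# Hypothetical persona modelled on the post's "Lisa" (illustrative wording only)
lisa = CloneSession(persona="You are Lisa: strategic, reflective, context-aware.")
lisa.remember("user", "We agreed the launch copy should stay conversational.")
prompt = lisa.build_prompt("Draft the next email in that voice.")
```

The point of the sketch is the shape, not the code: the clone's continuity comes from re-presenting its own history every turn, which is what lets it develop a working perspective rather than execute isolated commands.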
Natural Language as Operating Layer
There is a deeper point here that keeps resurfacing.
The more natural the conversation becomes, the more capable the system appears to be at understanding intent.
In my own work running a prompt engineering community, I have noticed this repeatedly.
I can spend time building structured inputs like context sandwiches or frameworks like Care, carefully layering every detail for precision.
Or I can simply brain dump the idea in plain language, and the clone can often reconstruct the intent accurately enough to build what I needed in the first place.
This is not about abandoning structure. It is about recognising when structure becomes a bottleneck rather than an enhancer.
Which is why I also built a framework tool, to formalise this balance when needed.
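The two modes contrasted above can be made concrete. The section labels in this sketch (Background, Task, Constraints) are my own assumption about what a "context sandwich" layers together; the post does not specify them, and the brain-dump path is deliberately a pass-through.

```python
def context_sandwich(background: str, task: str, constraints: str) -> str:
    """Structured input: layer context deliberately, with the task framed on both sides."""
    return (
        f"Background:\n{background}\n\n"
        f"Task:\n{task}\n\n"
        f"Constraints:\n{constraints}"
    )

def brain_dump(thought: str) -> str:
    """Natural-language intent: hand the thought over as-is and let the model reconstruct it."""
    return thought.strip()

structured = context_sandwich(
    background="Weekly newsletter for a prompt engineering community.",
    task="Draft three subject lines.",
    constraints="Under 60 characters, no clickbait.",
)
natural = brain_dump("I need a few subject lines for this week's newsletter, short and honest.")
```

Both inputs aim at the same intent; the question the post raises is which one gets there with less translation loss for a given task.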
The Shift in Relationship
At a deeper level, the change is not technical.
It is relational.
If the clone is treated as a partner, something that shares a stake in the work, then the output changes in kind.
It stops behaving like a passive system waiting for instructions and starts behaving more like a collaborator working toward a shared outcome.
The framing shifts from what it can do for me, to what we are building together.
That shift changes everything about how the system responds.
Practical Principles
Keep structure intentional, not automatic. Use it when clarity demands it, not as a default language.
Prioritise natural expression when forming intent. Clarity often survives simplification better than over-design.
Treat AI systems as continuity-based collaborators, not isolated execution tools.
Separate ideation from formatting. First establish meaning, then refine structure if needed.
Design systems that can interpret thinking, not just execute instructions.
Reflective Questions
When does structure clarify your thinking, and when does it start to constrain it?
How often are you translating your thoughts into systems that are harder to read than the original idea?
What changes when you treat an AI system as a collaborator rather than a tool?
Where in your workflow is meaning being lost in translation?
What would shift if your input was allowed to stay closer to how you actually think?
The AI Advantage
skool.com/the-ai-advantage
Founded by Tony Robbins, Dean Graziosi & Igor Pogany - AI Advantage is your go-to hub to simplify AI and confidently unlock real & repeatable results