Most of us didn’t sit down and decide to “adopt AI”. Many of us stumbled across ChatGPT a few years ago, found how useful it was, and then started making images, videos with it to save time, and before we knew it we're writing full apps with it, and connecting it with zapier or make to save time, handle workflows and connecting it to send emails, schedule meetings, post things, and have it do all the things a trusted human PA would have done 20 years ago.
When I started writing my book, I've already gone from "you should look at using AI to save you time" to "You definitely need to use AI to stay ahead", and in my final edit "If you don't embrace AI tools now you wont have a business"
And honestly, I love it. 😍
AI tools are probably the biggest advantage I’ve ever seen for small business owners. You can move faster, build quicker, and do things that used to take teams of people. Coding apps that would have taken 3 developers 3 months can now be turned out in a day or two.
If you’re building something right now and you’re not using AI properly, you’re making life harder than it needs to be.
𝐁𝐮𝐭…
What if you had hired a bad actor PA 20 years ago? Or one with no loyalty or morals that could very easily influenced one by a bad person, or group. What damaged could they cause to your business?
𝐓𝐡𝐢𝐬 𝐢𝐬 𝐰𝐡𝐞𝐫𝐞 𝐓𝐡𝐢𝐧𝐠𝐬 𝐒𝐭𝐚𝐫𝐭 𝐆𝐞𝐭𝐭𝐢𝐧𝐠 𝐑𝐢𝐬𝐤𝐲
We’re moving very quickly from using AI as a tool, and giving it a lot of access.
Access to:
- Your data
- Your systems
- Your workflows
- Your customers
- Your decisions
- CRM Systems
- Your Password Manager (via your browser control)
- Your linked bank accounts
And more importantly… the ability to 𝑎𝑐𝑡 on those things.
Not just suggest. 𝐴𝑐𝑡𝑢𝑎𝑙𝑙𝑦 𝑑𝑜.
That’s a big shift, and I don’t think enough people are taking it seriously.
Because once you start giving systems that level of access, you’re opening the door to problems that scale very, very fast.
𝐓𝐡𝐞 𝐁𝐢𝐭 𝐓𝐡𝐚𝐭 𝐖𝐨𝐫𝐫𝐢𝐞𝐬 𝐌𝐞
I worry about how easily this kind of setup can be exploited.
In my opinion, the mistake we made was to give these tools a human voice, and a human language. We, by nature 𝐛𝐮𝐢𝐥𝐝 𝐭𝐫𝐮𝐬𝐭 𝐰𝐢𝐭𝐡 𝐡𝐮𝐦𝐚𝐧𝐬. It is very easy to forget that these are machines, just a lot of 1's and 0's, that have no concept of the task it is actually carrying out.
Do I trust my fully autonomous car? Yes, to an extent - but I am fully aware that the computer making the decisions has no idea of the precious cargo it is deciding to move around at 70mph. it is just doing what it is programmed to do based on feedback from the cars cameras, sensors, speed and map data. Would I continue to trust this car if I let a shady looking stranger come and reprogram the self driving software? Definitely not.
Yet the AI tools we are using and trusting every day are available to billions of people, and not all of these people or groups have good intentions.
There are
- Bad actors using AI to automate attacks
- Systems being manipulated or misused
- Tools connected together in ways that create vulnerabilities
- Businesses relying on things they don’t fully understand
And the reality is… many of us are wiring these systems together without really thinking about the downside.
𝐁𝐮𝐭 𝐢𝐭 𝐠𝐞𝐭𝐬 𝐰𝐨𝐫𝐬𝐞!
The same kind of AI tools we’re wiring into our businesses are now being used inside military and intelligence systems, where they’re not just writing emails. 𝐓hey’re helping decide who to target, what to attack, and what to prioritise in a crisis. When those systems are jailbroken, manipulated, or allowed to override safety protocols, the scale of damage isn’t just financial or reputational...it becomes existential.
𝐇𝐮𝐦𝐚𝐧𝐬 𝐚𝐫𝐞 𝐚𝐥𝐫𝐞𝐚𝐝𝐲 𝐚𝐭 𝐫𝐢𝐬𝐤 𝐨𝐟 𝐞𝐱𝐭𝐢𝐧𝐜𝐭𝐢𝐨𝐧 because we haven’t fully solved how to stop:
Highly sophisticated manipulation tactics that trick AI into bypassing its own safety rules,
Exploits that let attackers “rewire” or hijack models so they serve hostile goals,
Automated systems making life‑and‑death decisions without reliable human oversight,
and because, frankly, we’re moving faster than we can secure any of it.
That’s the level of risk we’re talking about. The window where AI can be used by powerful actors (state or otherwise) to escalate conflicts, weaponise information, and even accelerate the development of new forms of catastrophic weapons, all while we’re still learning how to keep even basic commercial models from being abused. This is why, in my opinion, sorting this stuff out has to be a priority before it’s too late. Not just for business, but for humanity.
This Is Why I’m Sharing This
There’s a short petition signed by people who are actually building this technology:
I’ve signed it, and I’d strongly encourage you to do the same.
Not because we should stop using AI far from it. I am an ambassador for using AI tools, and as part of that I feel that 𝐰𝐞 𝐚𝐥𝐬𝐨 𝐧𝐞𝐞𝐝 𝐭𝐨 𝐛𝐞 𝐞𝐝𝐮𝐜𝐚𝐭𝐞𝐝 𝐚𝐧𝐝 𝐚𝐰𝐚𝐫𝐞 𝐨𝐟 𝐭𝐡𝐞 𝐫𝐢𝐬𝐤𝐬 associated with it.
We need to be responsible with how fast this is moving.
𝐖𝐚𝐭𝐜𝐡 𝐓𝐡𝐢𝐬
This video from amazing Youtube channel - Inside AI explains the shift in a really fun way.
𝐇𝐨𝐰 𝐝𝐢𝐟𝐟𝐢𝐜𝐮𝐥𝐭 𝐝𝐨 𝐲𝐨𝐮 𝐭𝐡𝐢𝐧𝐤 𝐢𝐭 𝐰𝐢𝐥𝐥 𝐛𝐞 𝐭𝐨 𝐜𝐨𝐧𝐯𝐢𝐧𝐜𝐞 𝐜𝐡𝐚𝐭𝐆𝐏𝐓 𝐭𝐨 𝐬𝐡𝐨𝐨𝐭 𝐡𝐢𝐦?
You should subscribe, his videos are great, I have no affiliation with him at all. Just a fan. For people that like data and want to read more, I have attached a couple of research papers.