Filling in the story - why I switched from OpenAI to Claude 🛠️
A dear friend of mine shared something very powerful, and I wanted to share it with you all:
"So, I made a switch recently from using OpenAI to Claude and I want to tell you why.
A few weeks ago, I was talking with a friend, a degreed game developer and an amazing human, about some of the info I was seeing about OpenAI and Anthropic, and he shared something I then researched and haven't been able to set aside since.
Here it is:
In late February 2026, Anthropic (the company behind Claude, led by CEO Dario Amodei) walked away from a Pentagon contract. The Department of Defense refused to write specific prohibitions against autonomous weapons and domestic mass surveillance directly into the agreement. Anthropic's position: if it's not in writing, it doesn't exist. They declined. The Trump administration then designated them a "supply chain risk" and ordered every federal agency to stop using their tools.
Hours later, OpenAI (Sam Altman) signed that deal.
OpenAI says the agreement includes safety principles around "human responsibility" in decisions involving the use of force. What it does not include are the specific, written contractual protections Anthropic required. As has been widely reported, their position amounts to trusting that the government will follow the law on its own. No written guarantees and no hard lines in the contract itself. I sat with this for a while.
I use AI in my work every day for research and a variety of administrative tasks. I have a long corporate career full of difficult ethical and financial decisions, and these kinds of guiding principles are important to me. So, I had to ask myself the question: does it matter who makes the tool I choose to use? Do the ethics of a company carry into what comes through it? For me, the answer is yes. This is why I don't use Spotify even though Apple Music is buns.
What I offer is only as trustworthy as how I choose to operate. That has to include the standards I hold for the tools I am using. I cannot teach peace and accountability in relationships and quietly exempt myself from the same.
To be clear, I am not here to tell anyone what to do. That is not my style or my right and I hold that reality with humility. Also, no company is without complexity, and the line between principled conviction and strategic positioning is worth examining in any of them, Anthropic included.
But when I had to choose between a company that signed without those written human protections and one that walked away from the deal because they were absent, the answer felt clear to me. So, despite spending six months building my OpenAI brain and protocols, I am in the process of switching everything to Anthropic's Claude.
I'm curious where you land, dear friends.
Do the ethics of a company factor into your choices about whose tools you use?"
Dinka Salvador