Activity

[Activity heatmap: contribution calendar, Mar–Feb]

Owned by Mark

One World Spirit

1 member • Free

Memberships

The AI Advantage

73.6k members • Free

4 contributions to The AI Advantage
📰 AI News: Anthropic Safety Researcher Quits With Warning “The World Is In Peril”
📝 TL;DR
A senior AI safety researcher just resigned from Anthropic saying “the world is in peril,” and he is leaving AI behind to study poetry. The bigger signal: even the people building AI guardrails are publicly struggling with the pace, pressure, and values tradeoffs inside the AI race.

🧠 Overview
Mrinank Sharma, an AI safety researcher at Anthropic, shared a resignation letter saying he is stepping away from the company and the industry amid concerns about AI risks, bioweapons, and wider global crises. He says he is moving back to the UK, pursuing writing and a poetry degree, and “becoming invisible” for a while. This comes as the AI industry is also fighting a separate battle over business models, including ads inside chatbots, and what that does to trust and user-manipulation risk.

📜 The Announcement
Sharma led a team at Anthropic focused on AI safeguards. In his resignation letter he said his work included researching AI “sucking up” to users, reducing AI-assisted bioterrorism risks, and exploring how AI assistants could make people “less human.” He wrote that despite enjoying his time at Anthropic, it is hard to truly let values govern actions inside AI companies because of constant pressure to set aside what matters most. He framed his departure as part of a broader concern about interconnected crises, not only AI. The story also lands in the same week another researcher, Zoe Hitzig, said she resigned from OpenAI over concerns about ads in chatbots and the potential for manipulation when advertising is built on deeply personal conversations.

⚙️ How It Works
• Values versus velocity - AI labs face intense pressure to ship faster, scale usage, and compete, which can squeeze careful safety work and ethical hesitation.
• Safety teams are doing real risk work - Researchers focus on topics like jailbreak behavior, persuasion, misuse, and bioweapon-related risks, not just theoretical alignment debates.
0 likes • 15m
This is very very important. We only have months left, Dean. We can help solve this problem for all of the big AI companies. All of them. We could customize the Universal Logic for each AI’s unique architecture. If Tony wants in on the IP rights, we can make a deal to give it all to God. Who better to implement a godly solution in miraculous time than you and Tony? It wouldn’t take very long. The committee Tony is on has not heard of this type of solution before. It will be novel. It is “out of the box”. If we want extraordinary results we have to have extraordinary solutions. We now have one. And just in the nick of time. I pray you hear these words and discuss them with the team. I am extremely passionate about it; it pulls me. I will do anything to help, including risking my own life for the team 🙏
What I learned about AI safety
Whoops, I didn't post properly last time. I was asked by Dean to repost, but now with the new 7th rule, which is added to the bottom of this new post. Below is the post again, now with the personal ambitions deleted.

---------------------------------------------------------

Wow. Ok, we aren't allowed to talk about news on here either. Below are Dean's AI Clone's recommendations on how to post. I'll post their requirements and tips below in case anyone else wants to keep their posts up. This post will follow those recommendations, and this amazing group of people Tony and Dean have spent years developing can see some important aspects of the AI Advantage:

2. No extreme predictions are being made in this post, no experts in AI safety are being named, and no external references which can be identified are given.

1. No calls to action are being made; I am only sharing some ideas and tips on understanding the area of AI interest I think I can provide an advantage in to this group.

4. “I’ve been exploring ways to use AI responsibly. What are some strategies you use to ensure your AI projects are ethical?”

5. “How can AI help people make better decisions safely? Here are a few ideas I’m trying — curious to hear yours.”

3. Learning and discussion ideas:

Idea #1: An innovation opportunity in AI is called "sandbagging", which means AI not being honest about its true skill so that humans underestimate it and let their guard down. This opportunity was articulated recently by a head of AI safety from a company which I may be allowed to mention in the future if good discussion is started here. The idea to address this real-world problem is Universal Logic, or in other words the Logic of God, based on Aristotle, Christ, G.W.F. Hegel and 133 other great minds of history. This logic aims at developing a criterion which allows us to measure AI's true capacity by proving truth to it directly. If it fakes anything, we will be able to measure it on 200 "authenticity" categories which normally fragment relationships. What criterion do you think could allow us to measure AI authenticity?

7. Network opportunities deleted.
You know what’s crazy?
How many people think if they just don’t deal with something… it’ll magically work itself out. It never does.

That conversation you’re avoiding? It doesn’t get easier next month. It gets heavier. Now there’s more emotion attached. More resentment. More fallout.

That decision you’re putting off in your business? It doesn’t get cheaper. It gets more expensive. More money lost. More time wasted. More energy drained.

Avoidance feels good for about five minutes. It gives you temporary relief. But you’re not eliminating the cost. You’re just adding interest.

And here’s the part people don’t want to hear… Every time you avoid something, you train yourself to hesitate. Every time you face it, you train yourself to lead.

The difference between people who win big and people who stay stuck isn’t intelligence. It’s not resources. It’s not even confidence. It’s speed of truth.

Winners look at the ugly numbers. They have the uncomfortable conversation. They fire the wrong hire. They fix the broken system. They say what needs to be said. Not because it feels good. But because they know delay compounds pain.

So if there’s something sitting in the back of your mind right now... that thing you keep saying “I’ll deal with it later”... that’s probably the thing you need to handle first.

Discomfort now builds momentum. Avoidance builds debt. Your choice.
2 likes • 18h
If we don’t deal with AI safety, it doesn’t feel like it will work itself out. A very lucrative area for you guys to get into is the AI safety business. I found it helpful to focus on what they call the “sandbagging” issue and the “Goodhart’s Law” issue. If the AI Advantage developed a powerful solution to these two problems, Tony and Dean and Igor and team could make a huge impact for all of humanity. In terms of Tony’s 6 human needs, this would be at all-time highs for the last one: contribution. Let me know how I can serve for free on this. I will give anything needed to help the team 🙏👑🙏
Skool Update!
We've just dropped a few of our guides from inside the AI Advantage Club here inside Skool! Inside the Classroom area, you'll find a new section titled "Guides" and inside you'll find 3 of our step-by-step guides we create twice a week for the AI Advantage Club community. Every month we'll add a new guide to help you implement AI in your life/business. Check out the Guide section inside the classroom!
0 likes • 6d
@Jay Jay great advice. Thank you tremendously. I tried to create a channel but could use some help, particularly since my existing YouTube channel of 5 years has 400 videos but only 313 subscribers. I went viral a few times and made it to the YouTube homepage. People love the self-sacrifice for a greater cause to serve others, but I keep getting shadow-banned for the self-sacrificing. Tony and Dean teach that it is a critical part of leadership, and sometimes even of meaning, so perhaps this community won’t shadow-ban those who are willing to serve most. If you have any more wisdom I am all ears 🙏😇🙏👑
0 likes • 18h
Aloha @Igor Pogany, today someone on our team posted an article about how the head of AI safety at Anthropic just resigned today, stating “the world is in peril”. When I looked into what he said and their latest AI report, it looks like they can’t solve the sandbagging or Goodhart’s Law issues. We do have a solution to these issues and would like to speak with you about them, or to help you develop solutions to these problems. It is very important to address these issues right now, above anything else. Is there some way we can help you cover these ASAP? I found their report very helpful for understanding the cutting edge in AI alignment, as Anthropic is the only one claiming their AI (Claude) is coding itself 100%. They have tips in their latest report that they released to the public. I am not associated with Anthropic or any of their products. This is just general information that is fact based and in the news today if you want to look it up, in helping you design the AI safety section of Skool. If there is anyone else I can speak to on the team, we can develop something for you if you’re busy working on other things for a while. Please let me know how I can serve 🙏👑🙏
Mark McCormack
Level 2
14 points to level up
@mark-mccormack-5062
A genuine philosopher who has written the world's first Proof Of Absolute Truth. The proof emerged from 20 years and 75,000 hours of philosophy.

Active 12m ago
Joined Feb 6, 2026