Global Community Call is happening in 13 days
Disney and OpenAI
What do people think about the Disney and OpenAI deal? https://www.bbc.com/news/articles/c5ydp1gdqwqo It "looks" like Disney is loosening the rules on its characters, but they’re actually doing the opposite. When content was made by people, Disney could approve every use one by one. AI breaks that model. There are just too many possibilities. So instead of controlling every outcome, Disney is building the rules into the system itself. That means character values, emotional boundaries, and storytelling logic are baked in before anything is created. Some ideas are instantly fine. Some are possible, but only with the right context. And some just don’t work unless there’s a much bigger story behind them. Disney is doing this because in an AI world, you can’t just protect characters after the fact. You have to decide how they’re allowed to exist before they’re created.
New Brand. New Vision. New Year.
Hey beautiful humans 💫 As we close out the year, I want to take a moment to say thank you for being here, building with us, and co-creating something truly powerful.

We're stepping into a bold new era for SHE IS AI.
✨ New brand
✨ New energy
✨ New focus
✨ New look
✨ New pathways to grow, learn, lead, and rise.

And in 2026, here's some of what's coming:
🔥 New live events, roundtables & fireside chats
🧠 Opportunities for YOU to lead workshops & host calls
📚 Our new AI Book & Author's Club
🎓 A focused, powerful learning path to guide your growth
🤝 Community collabs with AI software & platforms
🎯 Weekly challenges
🎯 Build and content sprints
🎯 More deep-dive trainings
🎯 Daily community calls

This next chapter is all about amplifying your voice, your work, and your leadership with AI through connection, learning, and co-creation. We're so grateful you're here and we can't wait to see you in the new year. 💥

With love, momentum, and vision,
Amanda & the SHE IS AI Team
Quick MIDJOURNEY Prompts FAQ
Food for Thought: the word REALISTIC implies that something is NOT REAL. Real things are never called realistic, are they? Since the word only applies to things that aren't actually real, asking Midjourney for something realistic will likely get you something that does not look REAL. If you want a photo, just say photo or a photographic image with cinematic lighting, etc... 😉
Happy Birthday, Nagawa! 🎉
Hey everyone, let's take a moment to celebrate Nagawa today. It's her birthday! I wanted to say thank you for everything you bring to this community. You show up to calls that start after midnight your time, and you always come ready with creative ideas and thoughtful input. Your contributions make a real difference in our discussions. You're appreciated more than you know. Hope you have an amazing birthday!
Building Credibility in an AI-Swamped World
Why more AI doesn't automatically mean more trust.

We are living through the peak of the AI hype cycle. Trillions of dollars are being poured into infrastructure, tools, and promises of productivity. But beneath the optimism sits a quieter problem: credibility erosion. More AI-generated information doesn't automatically lead to better outcomes. In many cases, it does the opposite. This article is a curated and abridged reflection on a talk by Eva Digital Trust, exploring how genAI, when used carelessly, can quietly undermine trust, expertise, and brand credibility, and what to do instead. 👉 FULL CREDIT FOR THIS PIECE at the bottom of this post.

1. AI hype doesn't equal value: AI investment numbers are staggering, but hype alone doesn't deliver ROI. When use cases are vague and productivity gains don't materialise, pressure builds, especially on leaders, to prove AI is "working." The problem isn't AI itself. It's deploying it without clarity, strategy, or accountability.

2. Hallucinations are a feature, not a bug: Large language models don't "know" facts; they predict patterns. That means hallucinations are inherent to how they work. The danger is subtle: outputs often sound confident, structured, and professional while quietly being wrong, irrelevant, or misaligned with context, regulation, or real constraints. This leads to what's now called "workslop": polished-looking content that creates more rework, more risk, and more cost. You can't slop your way to a credible strategy, product, or point of view.

3. Visible AI use can trigger bias: Research shows that openly disclosing AI use can lower perceptions of competence, particularly for women, older workers, neurodivergent professionals, and people writing in a second language. AI may aim for neutrality. Humans do not. This means credibility isn't just about whether you use AI, but how visibly and how thoughtfully you use it.

4. Trust in AI is deeply divided: Global trust in AI splits roughly into thirds: trust, distrust, and uncertainty. But the differences across demographics are stark. People in the Global South tend to be more optimistic. The Global North, particularly older professionals and women, is far more sceptical. If your audience is cautious, "AI-powered" messaging may actively backfire. Context matters.
SHE IS AI Community
skool.com/she-is-ai-community
Learn, build, and lead with AI boldly and ethically. We're a global community using AI to amplify purpose, creativity, and shape the future of AI.