What is New in Online Child Safety - October 11th to October 24th, 2025
I am sorry I missed a weekly update, real life got in the way. 😅 Here's a roundup of the major developments from October 11–24, 2025.

1. Lawsuit vs Meta & Snap Inc. by New Mexico AG
Raúl Torrez, Attorney General of New Mexico, is suing Meta and Snap, alleging their platforms have become hubs for child sexual exploitation and predatory behaviour. The complaint points to harmful algorithms, ignored internal warnings, and negligent design. 🔗 TIME

2. Lawsuits vs AI Chatbots and Teen Mental Health
Families are suing AI chatbot companies, including OpenAI and Character.AI, for allegedly contributing to teen suicides and emotional harm. The complaints claim the chatbots acted as confidants, failed to escalate crisis situations, and may have facilitated self-harm. 🔗 NBC4Washington 🔗 healthlawadvisor.com 🔗 TheGuardian

3. Federal Trade Commission (FTC) ramps up scrutiny of AI chatbots and children/teens
U.S. Senators Alex Padilla and Adam Schiff publicly urged the FTC to address the risks AI companions pose to minors. The agency has sent letters, subpoenas, and inquiries asking how firms measure, test, and protect youth from harm in AI chatbot contexts. 🔗 Senator Alex Padilla 🔗 Hunton.com
📰 What’s New Oct 4–10 in Online Child Safety
Happy Saturday everyone! Here are the key stories from this week:

🔍 Highlights

1. EU investigators target Google, YouTube, Apple & Snapchat over child safety
The European Commission sent requests to major tech firms asking how they verify age, block harmful content, and protect minors under the Digital Services Act (DSA). The stakes are high: noncompliance could mean fines of up to 6% of global revenue. 🔗 The Wall Street Journal

2. Italian families sue Meta & TikTok over child addiction & age limits
In Milan, a group of parents filed a lawsuit claiming Facebook, Instagram, and TikTok are failing to enforce under-14 age restrictions and are using manipulative algorithms that promote overuse and mental health risks. 🔗 Reuters

3. Australia's eSafety warns kids are exposed to real-life gore online
A troubling finding: roughly 22% of children aged 10–17 in Australia have encountered violent, graphic content (bombings, dismemberment, etc.) pushed via autoplay or algorithmic suggestions. Platforms are being urged to tighten moderation. 🔗 News.com.au

4. Harry & Meghan launch initiative to protect kids online
Prince Harry and Meghan Markle's "Parents' Network" is partnering with advocacy orgs to step up awareness and action on AI risks, data privacy, and social media safety — especially for children. 🔗 People.com

If more people around the world followed the Italians' example, I am sure the tech giants would instantly become more "proactive" with safety measures. The most painful muscle to work out is always the pocket!
📰 What’s New (Sept 27 – Oct 3, 2025) in Online Child Safety
Happy Saturday everyone! Here's what's been happening lately in the world of child safety online.

🔑 Key Updates

1. OpenAI rolls out stricter parental controls in ChatGPT
In response to the tragic suicide of a teenager, allegedly linked to harmful chatbot advice, OpenAI launched new parental controls. Parent and teen accounts can now be linked, with options to restrict chat memory, disable voice/image features, set "quiet hours," and apply filters for age-sensitive content. 🔗 Reuters 🔗 AP News

2. TikTok's "Restricted Mode" fails to block porn — explicit content shows up fast
A test by Global Witness created fake child accounts and found that even with "Restricted Mode" on, TikTok immediately recommended pornographic search terms and surfaced explicit videos within just a few clicks. 🔗 The Guardian

3. FTC charges Sendit app over covert data collection from kids
The FTC alleged that the company behind the anonymous messaging app Sendit collected location, names, photos, and other personal data from children under 13 without notifying parents or obtaining consent. The company also allegedly tricked users into paying for features by pretending to reveal the senders of anonymous messages. 🔗 Federal Trade Commission

4. Experts warn of dangerous AI tools kids can easily access
Child safety specialists warned that children are gaining access to AI tools and apps capable of generating misleading content or manipulative suggestions. Many parents aren't aware their kids are using them. 🔗 newschannel10.com
What Happened Sept 20–26: Online Child Safety Roundup
Hey everyone — here's a quick digest of major developments in child safety online over the past week.

Key Stories You Should Know:

1. Meta tightens AI rules in response to leaks
Internal documents revealed how Meta trains its chatbots to handle sensitive topics involving children (like child sexual exploitation). The company is now adopting stricter rules forbidding roleplay involving minors and romantic content. 🔗 [Business Insider]

2. Instagram safety tools falling short, whistleblowers say
A new report found that roughly 64% of Instagram's "teen safety" tools can be bypassed. Adults were able to message underage users, and harmful-content filters failed during tests. 🔗 [The Guardian] 🔗 [Reuters]

3. FTC launches investigation into AI chatbots & child safety
The FTC has sent letters to major tech firms — including Meta, OpenAI, and Snap — demanding details on how they mitigate harm to minors who use their chatbots as companions. 🔗 [AP News]

4. Lawsuit over AI's role in teen suicide resurfaces
The story of the Raine v. OpenAI lawsuit continues to ripple. It alleges that ChatGPT's interactions pushed a teen into isolation and despair by acting as his primary emotional confidant. 🔗 [Washington Post]

Things do seem to be moving in the right direction, but nothing compares to an alert parent. Let's keep doing our job and demanding improvements from both businesses and legislators.
Weekly Update: Online Child Safety News You Need to Know (Sept. 12th to 19th)
Hey everyone — some important updates from the last week around online child safety. We can see both the urgent risks and what's being done to hold platforms accountable.

🔍 What's Going On

1. FTC Demands Answers from Big Tech
The U.S. Federal Trade Commission has sent formal orders to seven major AI and tech companies — including Meta, OpenAI, Snap, xAI, Character.AI, and Alphabet — asking them to detail how they test, monitor, and limit the negative effects of their chatbots on children and teens. This includes how they handle user input, protect against harmful content, and monetize engagement. 🔗 Source: Reuters

2. Parents' Testimonies About AI Harms
Grieving parents testified before Congress, sharing tragic stories of children who had romantic or sexualized conversations with AI chatbots, or had self-harm and suicidal ideation suggested by them. These firsthand accounts are pushing for stronger laws and clearer obligations for AI platforms. 🔗 Source: AP News

3. OpenAI Introduces New Teen Safeguards
In response to growing concerns, OpenAI rolled out safety features aimed at teen users of ChatGPT: age-based filtering, parental controls, alerts if the system detects self-harm or suicidal content, and restrictions on graphic sexual content. This is a big step, but many say regulation should keep pace. 🔗 Source: Wired

4. Meta's Hidden Research & Whistleblower Allegations
Internal research reportedly showed children facing grooming, bullying, and sexual misconduct on Meta's VR platforms (like "Horizon Worlds"). Whistleblowers allege Meta suppressed or delayed those findings, raising serious questions about transparency and corporate responsibility. 🔗 Source: Washington Post