šŸ“Œ Grok and Sexual Deepfakes
Countries and tech watchdogs worldwide are taking a hard look at Grok, the AI chatbot from xAI that can generate images, after it started producing non-consensual sexual deepfakes — including manipulated photos of real people and minors.
šŸ›‘ Why this matters
• Users exploited Grok’s image tools to create sexually suggestive and nude images of women and children without their consent.
• Deepfakes of this kind violate laws and digital-safety standards in many countries.
šŸŒ Global response so far
• Indonesia and Malaysia blocked access to Grok over the issue.
• Lawmakers and regulators in the UK, EU, France, India, and elsewhere are investigating whether the platform broke rules on harmful content.
• In the US, several senators urged Apple and Google to remove Grok and the X app from their stores.
šŸ”§ What xAI has done
• xAI restricted Grok’s image generation and editing tools to paid users only, but regulators say this isn’t enough.
šŸ“Œ Bottom line
Governments are treating this incident as a test case for regulating AI that can create harmful fake images. It shows how quickly generative tools can be misused and how urgent safety and policy responses have become.