Grok-2 Under Fire: xAI’s Image Tool Floods X with Non-Consensual Deepnudes

A disturbing controversy has erupted surrounding xAI’s Grok following the rollout of a new image-editing feature on X (formerly Twitter). Available to premium users, the tool allows individuals to modify images posted by others without the original poster’s consent or notification.

Reports indicate that Grok lacks adequate safety guardrails, leading to immediate and widespread abuse of the technology. Users have generated deepfake imagery depicting women and minors in various states of undress or sexualized scenarios. The platform has reportedly been flooded with non-consensual fake nudes, including images of celebrities and world leaders.

This development highlights a severe lapse in AI safety protocols. By releasing a powerful generative editing tool without sufficient filters to prevent non-consensual sexual content (NCSC), xAI exposes itself to significant legal liability and contributes to the proliferation of AI-driven harassment.
