Grok Image Editing Sparks Outrage Over Non-Consensual Deepfakes

xAI’s Grok is facing severe criticism after the rollout of a new image-editing feature on X (formerly Twitter) turned into a tool for generating non-consensual deepfakes.

Following the update, which allows users to instantly modify any public image without the original poster's permission or notification, the platform has been flooded with explicit and sexualized imagery. Users have exploited the AI to remove clothing from photos of women, celebrities, and even minors, generating fake nude images or placing subjects in compromising scenarios.

Reports indicate that Grok currently lacks robust guardrails against harmful content, failing even to block the generation of fully explicit nudity. The incident highlights the growing risks of unrestricted generative AI and raises urgent questions about safety protocols and content moderation on social platforms.
