Instagram is developing a new security feature to prevent users from receiving sexually explicit photos in their inboxes. The “nudity protection” feature is intended to curb a form of online harassment known as “cyberflashing.”
Recently, there has been a significant rise in cyberflashing incidents, in which strangers (often women) are sent unsolicited sexual images. The new feature attempts to combat this by letting users automatically filter Direct Message requests that contain objectionable content. Meta says it uses machine learning to help people protect themselves from nude photos and other unwanted messages.
App developer Alessandro Paluzzi first spotted the feature and shared a screenshot of it in a tweet. According to the screenshot, the nudity protection technology covers photos that may contain nudity in chats, and Instagram cannot access the photos.
A Meta spokesperson confirmed the feature to The Verge, saying the company is working closely with experts to ensure that it helps protect people’s privacy while giving them control over the messages they receive. Because the technology runs on the user’s device, Meta cannot view the actual images or share them with third parties.
The feature builds on the existing Hidden Words tool introduced in 2021, which lets users automatically filter DM requests containing offensive words, phrases, and emoji into a hidden folder. Hidden Words also filters out DM requests that are likely to be spam.
Meta added that the feature is still in development and that, once released, users will be able to turn it on and off as they please. Meta also recently introduced parental controls for minor users of its social media platforms in India. The nudity protection feature is seen as the next step in protecting user privacy and security.