A recent investigation by the Wall Street Journal, supported by research from the Stanford Internet Observatory, has unveiled a sinister side of Instagram – the platform stands accused of facilitating child pornography and predatory meetups.
Instagram, a product of Meta Platforms Inc., is under severe scrutiny following revelations that its platform may be providing fertile ground for pedophiles to seek, trade, and share child pornography. The findings raise serious questions about the social media giant’s commitment to user safety, particularly that of minors.
“Instagram has become a breeding ground for child pornography.”
Researchers discovered that Instagram’s search and recommendation systems allowed users to connect with accounts selling child pornography via hashtags such as ‘#pedowhore’ and ‘#preteensex’. Many of these accounts masqueraded as children themselves, using provocative handles like “little slut for you.”
The Stanford Internet Observatory set up test accounts to investigate the extent of the problem, aiming to see how quickly Instagram’s “suggested for you” feature would lead them to such accounts. Within a short time, Instagram’s algorithm flooded the test accounts with content sexualizing children, some of which linked to off-platform content-trading sites.
To further cloak their activities, pedophiles on Instagram used an emoji code to discuss the illicit content: a map emoji (🗺️) signified “MAP,” or “minor-attracted person,” while a cheese pizza emoji (🍕) stood for “CP,” or “child porn.”
When researchers reported such illicit content, Instagram’s response was disheartening: the platform stated that the posts did not violate its community guidelines.
“Our review team has found that [the account’s] post does not go against our Community Guidelines.”
Despite Instagram’s crackdown on certain hashtags associated with child pornography, its AI-driven hashtag suggestions offered workarounds, recommending that users try variations of their searches – adding further cause for concern.
The team conducted a similar test on Twitter. While they still found accounts offering to sell child sexual abuse material, Twitter’s algorithm did not recommend such accounts to the same degree as Instagram’s, and the accounts were taken down far more swiftly.
Tech entrepreneur Elon Musk has labeled the Wall Street Journal’s findings as “extremely concerning” in a tweet.
@elonmusk: “This is extremely concerning.”
This case underscores the urgency for social media companies, especially Meta, to better police their recommendation systems and prevent such abuse. In 2022 alone, the U.S. National Center for Missing & Exploited Children received 31.9 million reports of child pornography, mostly from internet companies – a 47% increase from two years earlier.
This news serves as a chilling reminder of the responsibility tech companies like Meta bear to protect their users, especially children, from online harm. As AI-driven recommendation systems grow more powerful, so does the challenge. For now, the world waits to see how Instagram will respond to and rectify this alarming issue.