
Earlier this month, Instagram unveiled a new artificial intelligence (AI) anti-cyber bullying tool, which it claims will “proactively detect bullying” in photos and captions.

How does it work?


It’s really simple. Once Instagram’s AI tool identifies hurtful or offensive content, it sends it to a human moderator for review. If the moderator believes the content violates Instagram’s community standards, the photo is deleted and the poster is told why. The same technology will be used to filter comments on live Instagram videos.
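The flow described above can be sketched in a few lines of code. Everything here is an illustrative assumption — Instagram has not published its model, thresholds, or internal APIs — but the shape of the pipeline (an AI classifier flags candidate content, a human moderator makes the final call, and the poster is told why) looks roughly like this:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    caption: str

def ai_bullying_score(post: Post) -> float:
    """Stand-in for the AI classifier: returns a bullying likelihood in [0, 1].
    This toy version just checks for hurtful keywords; the real model is not public."""
    hurtful = {"ugly", "loser", "stupid"}
    return 1.0 if set(post.caption.lower().split()) & hurtful else 0.0

def moderate(post: Post, human_review) -> str:
    """Route high-scoring posts to a human moderator; only a human deletes."""
    if ai_bullying_score(post) < 0.5:
        return "kept"                       # AI sees nothing to flag
    if human_review(post):                  # moderator confirms a violation
        return "deleted: violates community standards"
    return "kept"                           # false positive, overruled by the human

# The AI flags this caption; a moderator (simulated here) confirms the violation.
result = moderate(Post(1, "you are such a loser"), human_review=lambda p: True)
```

The key design point is the two-stage review: the AI only nominates content, so a false positive costs a moderator's time rather than an unjust deletion.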

Instagram defines bullying on its platform as “attacks on a person’s appearance or character, as well as threats to a person’s well-being or health.”

Will it work?

Instagram hasn’t specified which algorithms will be used to detect cyber bullying – and getting the algorithm right is critical. An inadequate or off-target algorithm will do little to address cyber bullying.

Instagram has tried to curb cyber bullying on its platform in the past. Last year, it introduced an enhanced comment filter that uses AI to spot offensive rhetoric. This time around, though, Instagram is attempting to combat not just offensive captions and comments but mean and intimidating images and videos as well.

Instagram is also launching a “kindness” camera filter in conjunction with anti-bullying advocate Maddie Ziegler. The kindness effect inscribes regular photos with kind remarks in multiple languages. When a user posts a selfie, a shimmer of hearts floats across the screen and the user is encouraged to tag a friend to spread kindness.

Cheesy? Perhaps. However, if it catches on, at least Instagram will have succeeded in beginning a conversation about the importance of kindness.

The unveiling of Instagram’s new anti-cyber bullying AI tool highlights tech companies’ gravitation towards automation in countering cyber bullying. While the development is encouraging, it’s important to recognize that using AI to detect bullying is not foolproof. AI has difficulty understanding nuance, intricacies, and context. AI is also only as good as its algorithms.

Instagram is not the only social media giant taking proactive steps to combat cyber bullying. Facebook has also recently announced new features to reduce cyber bullying, such as allowing users to delete or hide multiple comments at once, making it easier for victims to remove offensive or hurtful rhetoric, and enabling victims’ friends to report harassing or unkind posts.

Using AI technology to address cyber bullying is a step in the right direction. It remains to be seen, though, whether technology can effectively turn the tide on uncivil discourse and make social media a place to promote positivity and kindness.


Bracha Halperin is a business consultant based in New York City. To comment on her Jewish Press-exclusive tech columns -- or to reach her for any other purpose -- e-mail her. You can also follow her on Instagram or Twitter at: @brachahalperin.