Facebook-owned Instagram has a lofty new goal: to lead the fight against cyberbullying.
“Sometimes bad things happen on Instagram. We know that,” Adam Mosseri, the head of Instagram, said at Facebook’s F8 developer conference last week. “We want to lead the fight against cyberbullying.”
The platform is currently testing new features in Canada to foster a “less pressurized” environment, but some are questioning the efficacy of those efforts.
“I am always happy to see any social media platform do something to address cyberbullying and digital drama. But I don’t think it will make a bit of difference. Popular is popular. Too much influence and money rides on it,” Parry Aftab, founder and executive director of StopCyberbullying Global, told ABC News.
At the conference, Mosseri announced several initiatives being tested, including hiding likes, a “nudge” for mean comments and an “away” mode.
Hiding likes publicly is part of an overall policy to make the app seem less like “a competition,” Mosseri said — the likes can still be seen privately. The intention, he said, is “to worry a little bit less about how many likes they’re getting on Instagram and spend a bit more time connecting with the people that they care about.”
This aligns with Facebook’s attempts to get people to focus on posting for closer acquaintances or “groups” rather than all of their friends.
The “nudge” feature alerts a person typing a potentially mean comment that it may be, well, mean to post it. This would add to the platform’s preexisting comment filter.
“If you’re typing something aggressive, maybe we give you a light nudge to rethink that,” Mosseri said.
Additionally, users can turn on an “away” mode to avoid the app during sensitive moments, like switching schools or going through a breakup, Mosseri suggested.
Cyberbullying experts had mixed responses to the new initiatives.
The majority of U.S. teens — 59% — say they have been cyberbullied, according to a Pew study published last September.
Social media platforms have already been criticized for not detecting hate speech and violence quickly enough after it’s been posted, as demonstrated when the livestream of the New Zealand mosque attacks in March spread widely across platforms before being taken down.
Jon Roos, a behavioral psychologist with a practice in Los Angeles, told ABC News detecting bullying can also be complicated because it can be so tailored to individuals it wouldn’t be noticed by other people, much less artificial intelligence.
“The words can seem innocuous,” Roos said. “Kids get bullied all the time with very specific names — from TV shows, for example.”
He advises his patients’ families to spend as much time as possible off social media and in face-to-face, real-life social interactions.
Still, advocates say there’s a need for action by tech companies.
“Better and faster handling of abuse reporting, watching and disciplining of offending users, gaining user trust so they will report cyberbullying and not fear reprisals or being ignored by Instagram is what will turn things around,” Aftab, the StopCyberbullying Global director, who is also a longtime cybersecurity and privacy lawyer, said.
“Right now, they don’t do the kind of discipline that gaming platforms do — three strikes and you’re out. They need to,” she said. “But as long as they are counting and monetizing the number of users, they won’t.”