Photo illustration: a girl browses Facebook on July 10, 2007 in London, England. (Chris Jackson/Getty Images)
“This is a positive and necessary step that these companies have taken and one that should now be a springboard for these companies to tackle online abuse against women as a top priority,” Azmina Dhrodia, a senior policy advisor on gender and data rights at the World Wide Web Foundation (WWWF), told Forbes. The pledge comes after 14 months of WWWF-sponsored virtual workshops attended by Facebook, Google, Twitter and TikTok representatives. All four emerged with plans to give users finer control over who can see, share and comment on their content, as well as new, more comprehensible tools for reporting abuse.
Yet a timeline for each of the four giants’ implementations has not been shared. “It’s a balance of being really ambitious about this, making sure that this is their top priority, while also being realistic,” Dhrodia says.
Brianna Wu isn’t holding her breath. “If you’re not committing to measuring outcomes, I don’t think you’re serious,” the video game developer tells Forbes. Wu was a prominent target of Gamergate, the notorious campaign of organized online violence aimed at women in the video game industry that put gender-based Internet abuse in the public eye. In 2014, anonymous users posted personal information about Wu, including her address, on an 8chan message board. Multiple death and rape threats, among other forms of harassment including horrific images sent through email, forced Wu to flee her home.
Now, Wu demands more than words. “When I read a report that says Facebook is committing to helping women file more harassment reports, that sounds great,” she tells Forbes. “But are they allocating more budget to have people respond to them? Are they committing to measuring the degree to which harassment is increasing or decreasing?”
Facebook—which posted on Wednesday about its commitment to advancing its Women’s Safety Hub but made no mention of the impending WWWF pledge—directed Forbes to its Community Standards Enforcement Report in response to a question about such metrics. The report has a section on Bullying & Harassment, and includes statistics about “content actioned” in response to instances thereof, but says nothing of harassment of women in particular, who are more commonly targeted than men are. In fact, the WWWF found that 38% of women and girls worldwide have personally experienced online abuse, and that the rates are even higher among women from marginalized groups, particularly in the Black and LGBTQ communities.
Twitter and TikTok representatives emphasized safety as a priority but did not respond to questions about how new protocols developed at the workshops might measurably change abuse of women on their platforms. Google, too, declined to discuss quantifiable results and instead directed Forbes to a Medium post by Jigsaw, Google’s unit that tracks and analyzes emerging threats. The post recounts Google’s participation in the WWWF pledge and outlines its own digital safety program for women, which “aims to ensure the security of women around the world, by providing them with necessary skills to protect themselves against violence — both online and offline.”
“We are always hoping that the people being presented with the problem aren’t the ones being burdened with the solution,” says Elisa Lees Munoz, executive director of the International Women’s Media Foundation, which tracks online attacks against women journalists. She says the innovations outlined by the WWWF continue to place too much of the onus on the victims of internet violence. “We’ve spoken with journalists who spent countless hours taking screenshots, trying to get in touch with the platform, reporting their abusers, trying to track them down. These women have jobs to do.”
Past examples of spotty follow-through by the companies involved in the pledge do little to assuage the fears of those who believe this time around will be no different. In 2011, Facebook responded to a petition to remove pages advocating violence against women by removing them from the site. It did not, however, condemn systemic misogyny and gender-based violence on its platform. “Our experience has taught us that openness requires people to be free from harm, but also free to offend,” the social media company told the Toronto Star.
Two years later, Facebook responded to pressure from the activist group Everyday Sexism by outlining in its Public Safety forum measures it would take to address abuse on its platform. An expert-assisted review of its Community Standards, improved training for content moderators, accountability for creators of offensive or threatening content, better communication with civilian groups, and support of research conducted by organizations like the Anti-Defamation League were all part of the plan. “We need to do better — and we will,” Facebook pledged eight years ago.
Similarly, in 2014, Twitter executives proclaimed they’d take action after Zelda Williams, daughter of comedian Robin Williams, became the target of a brutal harassment campaign on the platform following the death of her father. “We will not tolerate abuse of this nature on Twitter…we are in the process of evaluating how we can further improve our policies to better handle tragic situations like this one,” wrote Del Harvey, VP of Trust & Safety, presaging the language the company’s Policy account tweeted on Thursday.
“It’s almost like a cycle,” Chloe Nurik, a social media scholar, says of the industry’s promises to address gender-based internet violence. “People draw attention to it, and the sites take initiatives that are helpful but have very limited utility over time, and then problems persist, and then there’s media exposure, and then the site takes sort of a limited measure,” notes the doctoral candidate at the University of Pennsylvania’s Annenberg School for Communication. “That’s one of my primary concerns with the pledge: it just fits into this broader narrative of weak responses that aren’t upheld over time.”
Also problematic, says Nurik, is the financial incentive that social media giants have to allow inflammatory content to flourish on their sites: studies have examined how Gamergate generated enough traffic-driven revenue to pay Reddit’s bills for a month.
Conversely, building and integrating new protections into every aspect of a social media platform will cost companies millions, says Kat Lo, a researcher and consultant specializing in online harassment and content moderation. Lo is bullish on some of the solution prototypes introduced through the WWWF workshops, particularly one called Gateway, a sort of digital panic button that would let users alert platforms while an attack is ongoing. She is more skeptical about whether any of these prototypes will see the light of day. “In what amount of detail have social media companies committed to fulfilling any of these prototypes?” Lo asks, voicing what many are thinking. “And how will they be held accountable to them?”