The very ugly reality behind internet abuse - Women's Agenda


This week Victoria Brownworth’s story on the abuse Coralie Alison received after seeking to have a US rapper banned from Australia has been our most read story. By a very long shot.

It is a deeply disturbing read. Even skimming just a handful of the least-vile comments directed at Coralie is enough to despair for humanity. Truly.

This context inevitably raises the subject of how a social media platform like Twitter discharges its responsibility to protect users from such abuse. With good reason.

How to manage abuse and harassment is a real and evolving dilemma for social media platforms, as well as users.

At the very least, being on the receiving end of abuse or harassment undermines the enjoyment of any social media platform. At worst, it poses a risk to users' safety.

How can this be managed? In an ideal world, every violent and abusive tweet would be identified and blocked before it was sent. But in the real world, where roughly 350,000 tweets are sent every minute (around 500 million a day), that is impossible.

It’s complicated not just by the fact that social media thrives on providing a space for free expression, but by the fact that it thrives on immediacy and speed.

How to balance these objectives while ensuring the enjoyment and safety of users?

As Twitter Australia’s director of public policy, Julie Inman-Grant considers this challenge regularly.

“Twitter wants all users to have a great experience on our service, and feeling safe is a key part of this,” she told Women’s Agenda. “While Twitter is a platform for free expression, there are rules that users must abide by, pertaining to things like harassment, abuse and a number of other areas.”

The organisation has bolstered its resources significantly to help ensure users are protected. The company now reviews five times as many user reports as it did previously and has tripled the size of the support team focused on handling abuse reports.

“These investments in tools and people allow us to handle more reports of abuse with greater efficiency, and significantly reduce the average response time to a fraction of what it was,” she says. “The safety of our users is extremely important to us and is something we continue to work hard to improve upon.”

The reporting process has been streamlined so it now takes a matter of seconds. And the muting and blocking tools have been improved to help users better and more easily control their Twitter experience.

“We have also made it easier for those users experiencing violent threats to more easily report these tweets to local law enforcement,” Inman-Grant told Women’s Agenda. “We are also beginning to add several new enforcement actions for use against accounts that violate our rules. These won’t be visible to the vast majority of rule-abiding Twitter users, but give us new options for acting against the accounts that don’t follow the rules.”

Pre-moderating tweets isn’t possible, so Inman-Grant encourages users to report any and all violations they see.

“We encourage all users to use our in-Tweet tools to report to get it in front of the right people – our safety team who will review and, where required, take action,” she says. “Reporting through these purpose-built in-tweet tools is vital to ensuring that Tweet or account is prioritised, reviewed and actioned. Sending a Tweet to @TwitterAU, @Support, @Twitter or any other account does not ensure the tweet gets reviewed – these accounts were not created for this purpose.”

Individuals respond to abuse and harassment differently: some choose to share it, some mute or block users readily and I’m sure some users walk away from the platform completely.

Personally, I report, block and mute regularly. I don’t block people whose opinions I disagree with or those who disagree with me. I do, often, block people who swear at me, abuse me or otherwise engage in conduct I wouldn’t contemplate accepting in real life.

Inman-Grant cautions against sharing the abusive material.

“We discourage retweeting this type of abusive content as it also doesn’t trigger a review, but can serve to perpetuate the abuse and give the abuser a wider audience.”

Some might argue that individuals should not bear the onus for protecting themselves on the platform. I have some sympathy for that perspective particularly in extreme cases like Coralie’s. Equally, however, I am unsure how social media platforms could realistically police all material.

I do believe that experiences like Coralie’s underscore the critical need for a rigorous, effective and adaptable legal framework for the digital age.

Because the aspect of this case I find more disturbing than anything else is the fact that these tweets weren’t sent by robots.

Yes, social media platforms facilitate the mass-sharing of content, and in some instances that content is vile. But it is human beings – who live and work beside us – who create it. Who write it, who send it.

Each and every time I find myself blocking an abusive user, it feels like a hollow victory. I might not have to read their views in my feed anymore. My experience on the platform will be more enjoyable as a result.

But there’s no denying that an individual who is willing to put such vile words into a public forum exists. I think about that when I walk out of the office, hop on a bus or do the shopping in a busy supermarket. Among all those people, some of them are willing to denigrate, abuse and threaten others. And that is a disconcerting reality.
