Twitter steps in to curb abuse

Social networks struggle to develop methods to monitor threats.

By Jon Tullett, Editor: News analysis
Johannesburg, 21 Aug 2013
Grace Dent received death threats on Twitter.

Twitter and other providers are scrambling to tackle a growing problem of abuse on their networks. Governments, awareness groups, service providers and individuals are all struggling to come to terms with the darker side of modern communications.

Twitter has been the focus of complaints recently, with hate speech erupting on the micro-blogging site. Several female journalists found themselves the targets of death and rape threats, calling attention to the social network's apparent lack of abuse controls.

Other networks are also under the spotlight. Ask.fm has had to tighten its operations after bullying led to a teen user's suicide, and the network appeared slow to respond to calls for action (this was not the network's first such incident). Across the Internet, social networks are dealing with the modern realities of abuse in global communication.

Action groups are calling for sites to be banned outright, or for government oversight. Cynical observers are calling for better parenting. The networks are caught in the middle, trying to please everyone.

But this is not a new problem; it's just one we've yet to solve. Twitter's updated mechanisms are thoughtful, measured, and unlikely to help much, but they are part of a gradual effort across the industry to improve matters.

'Griefing' and cyber rape

One of the earliest known examples of online abuse occurred in the multiplayer game LambdaMOO, where a participant simulated sexual acts against other users' avatars, an incident documented by journalist Julian Dibbell in "A Rape in Cyberspace". Little action was taken at the time, but the writing was on the wall: online communities could be disrupted by small numbers of abusers.

Today, with global networks becoming part of everyday life, some communities are simply rife with abuse. Often it is merely puerile, as anyone who has been on the receiving end of a potty-mouthed Xbox online gamer will know.

Abuse in online gaming is common enough that we have a name for those who perpetrate it: griefers. Message boards like 4chan delight in robust free-for-all exchanges of views, and one ventures there at one's own risk. The ongoing debate about the dissolution of manners in faceless online networks, however, misses the point: sometimes people say nasty things, and growing a thick skin is part of going online. But there are times when abuse becomes dangerous, and authorities must intervene.

Abuse vs bullying

There are, broadly speaking, two flavours of abuse. The first is the simply illegal: death threats, fraud, exposure of private information... these are crimes online and off. The second is emotional abuse - cyber-bullying - which can be merely mean-spirited, but when the recipient is particularly vulnerable can lead to tragic outcomes like teen suicide.

Twitter, like everyone else, wants to encourage communication while protecting its users from abuse.

And like many other social platforms, Twitter has difficult ground to navigate here. Too much control, and conversation is stifled, accusations of censorship arise, and users may be chased to rival platforms. Too little, and abuse can spiral unchecked.

And any control is inherently subjective: a grey area with complicated issues that requires operators to draw their own line in the sand, and then risk the ire of innocent users who fall foul of processes designed to please the rest.

Many arguments for social network abuse controls revolve around protecting children. Cyber-bullying is a very real problem, with tragic cases of young suicides on the rise. The threat of online predators grooming children via social media is also a worry.

One thing is clear: it is a problem, and the pressure to address it is growing.

Historically, networks like Twitter have preferred an egalitarian approach: an open network with a minimum of controls, where free speech is the order of the day and users either behave themselves, are blocked, or are reprimanded by the community. Unfortunately, the theory didn't work out in practice, and the faceless anonymity of online communities has tended to allow abuse to take root and prosper.

Dealing with it now is like playing Whack-a-Mole. Determined abusers can and will find loopholes to pursue their victims mercilessly, even if they think it is merely a prank.

When the victims are children, that is particularly difficult to deal with. The counter-argument is often that parents should do a better job of looking after their kids, overseeing their online activities and educating them about the risks. But the reality is that children are natural hackers, and will bypass their parents' best efforts to protect them.

Hannah Smith, the teen who hanged herself after suffering cyber-bullying on Ask.fm, had in fact been banned from the site by her parents. Better, more Internet-savvy parenting is absolutely needed, but it won't obviate the need for networks to step up too.

Twitter treads carefully

Twitter's response to the sudden focus on abuse has been carefully measured, addressing the core issues as best it can, while acknowledging that the problem is bigger than it can handle. While some of its response is clearly careful manoeuvring to avoid liability, much is rational and well thought-out.

In its terms and conditions, the company notes that people can and do say offensive things online, and offers a breakdown of specific activities which step over the line, such as threats of violence. It stresses that outright abuse should be reported to law enforcement, since any action severe enough to count as abuse is almost certainly illegal. It describes some activities which can lead to account suspension (creating an account solely to direct vitriol at a victim, for example), and is rolling out easier access to its reporting pages so users can flag tweets as abusive.
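
Twitter's description amounts to a triage: route each report by severity. Purely as an illustration - the categories, thresholds and code below are this article's assumptions, not Twitter's actual rules or systems - the logic might look something like this:

    # Toy triage of abuse reports, loosely following the categories Twitter's
    # terms describe. Thresholds and keyword checks are assumptions; a real
    # system would need far more context than naive string matching.
    from dataclasses import dataclass

    @dataclass
    class AbuseReport:
        tweet_text: str
        account_age_days: int     # age of the reported account
        account_tweet_count: int  # how much that account has posted

    def triage(report: AbuseReport) -> str:
        text = report.tweet_text.lower()
        # Threats of violence are illegal outright: refer to law enforcement.
        if any(w in text for w in ("kill you", "bomb", "find your house")):
            return "refer to law enforcement"
        # A brand-new, low-activity account aimed at one victim looks like an
        # account created solely to abuse: grounds for suspension.
        if report.account_age_days < 2 and report.account_tweet_count < 10:
            return "suspend account"
        # Everything else queues for human review.
        return "human review"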

Twitter also notes that abusive users can just create new accounts to harass victims - blocking IP addresses is ineffectual and can unfairly block other users (such as those on shared networks or dynamically-assigned addresses).
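
The over-blocking problem is easy to demonstrate: one public IP address often fronts for many legitimate users behind a campus or office NAT gateway, so banning the address bans them all. A minimal sketch, with made-up data:

    # Why IP blocking over-blocks: many accounts legitimately share one
    # public address behind a NAT gateway. All data here is hypothetical.
    login_log = [
        ("196.25.1.10", "abusive_troll"),
        ("196.25.1.10", "student_a"),   # same campus gateway address
        ("196.25.1.10", "student_b"),
        ("41.0.55.3",   "unrelated_user"),
    ]
    blocked_ips = {"196.25.1.10"}       # ban the troll's last-seen address

    collateral = {user for ip, user in login_log
                  if ip in blocked_ips and user != "abusive_troll"}
    print(collateral)  # {'student_a', 'student_b'}: innocent users locked out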

And in that admission is the core of the problem. Whatever measures Twitter puts in place, it can only prevent a subset of abuse - there will always be technical ways to circumvent technical filters. Dedicated abusers will use privacy networks like Tor or Web proxies to conceal their origin, creating new accounts faster than Twitter can possibly shut them down. The only way to moderate messages is to do it the hard way, with human oversight - and that simply doesn't scale, even if it were a route Twitter wanted to consider, which it doesn't. Manually approving 400 million tweets a day, in all the world's languages, is clearly not viable.
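
The arithmetic behind that claim is stark. Taking the 400-million-a-day figure, and assuming - generously - that a moderator could vet one tweet every five seconds for a full eight-hour shift:

    # Back-of-envelope maths on manual moderation. The 400m/day figure is
    # from the article; the per-moderator review rate is an assumption.
    tweets_per_day = 400_000_000
    print(tweets_per_day / (24 * 60 * 60))     # ~4,630 new tweets per second

    reviews_per_shift = 8 * 60 * 60 // 5       # one tweet per 5s, 8h shift
    print(tweets_per_day / reviews_per_shift)  # ~69,400 moderators per day

Even before accounting for language coverage, context and appeals, the headcount alone rules it out.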

In some cases, it has to try anyway. Some countries have laws against specific types of hate speech, such as bans on anti-Semitism in France and Germany, and networks must make an effort to filter content on a country-by-country basis or risk being blocked entirely.
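
Mechanically, country-by-country filtering is a visibility check at serving time rather than outright deletion, broadly the approach behind Twitter's country-withheld content feature. A generic sketch follows; the field names are assumptions, not Twitter's actual schema:

    # Generic per-country withholding: the tweet stays in the datastore, but
    # is filtered when served, based on the viewer's country. Field names
    # are assumptions, not Twitter's real schema.
    tweet = {
        "id": 42,
        "text": "content illegal in some jurisdictions",
        "withheld_in": {"FR", "DE"},
    }

    def visible_to(tweet: dict, viewer_country: str) -> bool:
        return viewer_country not in tweet["withheld_in"]

    print(visible_to(tweet, "ZA"))  # True: served normally
    print(visible_to(tweet, "DE"))  # False: withheld, usually with a notice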

Local authorities, meanwhile, are almost as helpless. Few take online threats seriously, and even those that do often lack the skills or jurisdiction to investigate. Social networks are subject to their home country's law enforcement jurisdiction, and foreign agencies must make enquiries through MLATs - Mutual Legal Assistance Treaties - with layers of complexity and bureaucracy to overcome, even if you can find an officer who understands the problem. Your local charge office might take down your complaint, but the chance of any meaningful resolution is not high.

Vigilantism ahoy

Faced with the inability of networks and law enforcement to prevent abuse, some users attempt to self-police their online communities. Although this depends heavily on the moderators themselves, and on the scalability of the environment, it can work in practice. In some cases, though, it strays into vigilantism, with communities uniting in protest against abuse. This was once relatively common against spammers, with groups digging up and then revealing spammers' personal details so they could be targeted for real-world pranks or abuse. The process became known as "doxing", and carries the obvious risk that an innocent bystander could be accidentally targeted (or deliberately mis-targeted).

Online vigilantism is a touchy and wide-ranging subject, but it is arguably a battle which has already been won (or lost, depending on where you stand). Most social network users today, faced with abuse, do not pick up the phone to the police. Their first instinct is to report the behaviour to what they perceive as the real authority - the network itself. We expect Twitter, Facebook, Google, Microsoft, Sony, Yahoo and all the rest to set and enforce standards, and to police their communities.

The authorities, for their part, either tacitly or openly encourage this. Overworked, understaffed, unskilled and inexperienced, they really don't want to deal with the torrent of complaints from social networks.

Whatever the legislation might say, our expectation of enforcement rests with a third party, not with the genuine authorities. Similarly, we report fraud to our bank or credit card issuer, and phone abuse to our telecom provider; that works because those companies operate within regulatory frameworks which manage the interface between them and the authorities. Online, where such frameworks are non-existent or immature, the end result tends to be erratic enforcement and unhappy communities - PayPal, for example (and in contrast to brick-and-mortar financial institutions), is frequently singled out for criticism over apparently arbitrary account lock-downs.

The action groups calling for laws to control online abuse are well-intentioned, but aiming at the wrong target. Governments shouldn't be enforcing standards on networks, but they could be working towards frameworks which demand more responsive, and more consistent, handling of reported abuse.

Some networks, like Google+, are gradually moving away from the 'pseudonymity' of online personas, encouraging or requiring users to identify themselves with real-world identities. This faces challenges of its own, not least resistance from users accustomed to the illusion of privacy afforded by a nickname on a message board. But it reflects a broader trend towards encouraging online users to communicate more responsibly.

Yet again, we are faced with an online phenomenon outgrowing our existing mechanisms. Parenting ends at the bedroom door, law enforcement ends at national borders, and service oversight ends outside the courtroom. Realistically, the threats online are no different from those in the real world: children have always been exposed to bullying, cranks have always sent death threats, and fraud has always happened. This is not a new problem, but as the Web grows to subsume everyday life, people are turning to the new authority figures for solutions.
