Twitter Adds Heft to Anti-Harassment Toolbox
Twitter on Wednesday announced that over the next few months it will roll out changes designed to increase the safety of users:
- Its algorithms will help identify accounts as they engage in abusive behavior, so the burden no longer will be on victims to report it;
- Users will be able to limit certain account functionality, such as letting only followers see their tweets, for a set amount of time;
- New filtering options will give users more control over what they see from certain types of accounts — such as those without profile pictures, or with unverified email addresses or phone numbers; and
- New mute functionality will let users mute tweets from within their home timelines, and decide how long the content will be muted.
Twitter also will be more transparent about actions it takes in response to reports of harassment from users.
“These updates are part of the ongoing safety work we announced in January, and follow our changes announced on February 7,” a Twitter spokesperson said in a statement provided to TechNewsWorld by Liz Kelley of the company’s communications department.
A Fine Balance
“We’re giving people the choice to filter notifications in a variety of ways, including accounts who haven’t selected a profile photo or verified their phone number or email address,” the spokesperson noted.
The feature is not turned on by default but provided as an option.
Still, suggesting special handling for accounts without a profile picture — known as “eggs” because of the ovoid shape of the space left for the picture — and those without an email address or phone number could pose a privacy dilemma.
Twitter “is walking a fine line here between censorship and useful communication,” observed Michael Jude, a program manager at Stratecast/Frost & Sullivan.
Making the Internet Safe for Tweeters
Twitter’s ongoing efforts to curb abuse show that the company is “aware they have a serious problem, and what they’ve done so far is less than adequate,” remarked Rob Enderle, principal analyst at the Enderle Group.
Previous attempts “were pretty pathetic, really, and Twitter needed to do something more substantive,” he told TechNewsWorld. “This seems to be far more substantive.”
Still, the new measures “don’t address the cause of the behavior — and until someone does, they will only be an increasingly ineffective Band-Aid,” Enderle cautioned.
No Place for the Timid
The latest tools may be successful at first, but “people will find ways around them,” Frost’s Jude told TechNewsWorld.
Twitter’s approach “is purely defensive,” he said. “It ought to just open up its space with the appropriate disclaimers; that would be easier and cheaper, and people who are easily offended would be put on notice that Twitter isn’t a safe space.”
The more controls Twitter attempts to impose, the less useful it will be to an increasing number of people, Jude contended. “Ultimately, Twitter may create a completely politically correct and safe place to socialize, but that will only appeal to a niche population.”
Online Crime and Punishment
Twitter’s defensive play is not enough; the hammer should be lowered on abusers, suggested Enderle.
“Efforts need to be made to hold those that are clearly over the line to more painful penalties to effectively address the causes of the behavior and not just the symptoms,” he maintained.
“Currently, laws and enforcement are well below what they should be for most abhorrent online activity,” said Enderle, “including things like identity theft that would typically be considered criminal acts.”