Twitter Adds Heft to Anti-Harassment Toolbox

Twitter on Wednesday announced that over the next few months it will roll out changes designed to increase the safety of users:

  • Its algorithms will help identify accounts as they engage in abusive behavior, so the burden no longer will be on victims to report it;
  • Users will be able to limit certain account functionality, such as letting only followers see their tweets, for a set amount of time;
  • New filtering options will give users more control over what they see from certain types of accounts — such as those without profile pictures, or with unverified email addresses or phone numbers; and
  • New mute functionality will let users mute tweets from within their home timelines, and decide how long the content will be muted.


Twitter also will be more transparent about actions it takes in response to reports of harassment from users.

“These updates are part of the ongoing safety work we announced in January, and follow our changes announced on February 7,” a Twitter spokesperson said in a statement provided to TechNewsWorld by Liz Kelley of the company’s communications department.

A Fine Balance

“We’re giving people the choice to filter notifications in a variety of ways, including accounts who haven’t selected a profile photo or verified their phone number or email address,” the spokesperson noted.

The feature is not turned on by default but provided as an option.
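
For illustration only, here is a minimal Python sketch of how such an opt-in notification filter might work; the account fields and function name are hypothetical and do not reflect Twitter's actual data model or API.

    from dataclasses import dataclass

    @dataclass
    class Account:
        # Hypothetical fields; not Twitter's actual account model.
        has_profile_photo: bool
        email_verified: bool
        phone_verified: bool

    def should_filter_notification(sender: Account, filters_enabled: bool) -> bool:
        # Mirrors the opt-in behavior described above: filtering is off by
        # default and applies only to accounts with no profile photo or no
        # verified email address or phone number.
        if not filters_enabled:          # opt-in, off by default
            return False
        if not sender.has_profile_photo:
            return True
        return not (sender.email_verified or sender.phone_verified)

    # Example: an "egg" account with nothing verified is filtered only when opted in.
    egg = Account(has_profile_photo=False, email_verified=False, phone_verified=False)
    print(should_filter_notification(egg, filters_enabled=True))   # True
    print(should_filter_notification(egg, filters_enabled=False))  # False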

Still, suggesting special handling for accounts without a profile picture — known as “eggs” because of the ovoid shape of the space left for the picture — and those without a verified email address or phone number could pose a privacy dilemma.

Twitter “is walking a fine line here between censorship and useful communication,” observed Michael Jude, a program manager at Stratecast/Frost & Sullivan.

Making the Internet Safe for Tweeters

Twitter’s ongoing efforts to curb abuse show that the company is “aware they have a serious problem, and what they’ve done so far is less than adequate,” remarked Rob Enderle, principal analyst at the Enderle Group.

Previous attempts “were pretty pathetic, really, and Twitter needed to do something more substantive,” he told TechNewsWorld. “This seems to be far more substantive.”

Still, the new measures “don’t address the cause of the behavior — and until someone does, they will only be an increasingly ineffective Band-Aid,” Enderle cautioned.

No Place for the Timid

The latest tools may be successful at first, but “people will find ways around them,” Frost’s Jude told TechNewsWorld.

Twitter’s approach “is purely defensive,” he said. “It ought to just open up its space with the appropriate disclaimers; that would be easier and cheaper, and people who are easily offended would be put on notice that Twitter isn’t a safe space.”

The more controls Twitter attempts to impose, the less useful it will be to an increasing number of people, Jude contended. “Ultimately, Twitter may create a completely politically correct and safe place to socialize, but that will only appeal to a niche population.”

Online Crime and Punishment

Twitter’s defensive play is not enough; the hammer should be lowered on abusers, suggested Enderle.

“Efforts need to be made to hold those that are clearly over the line to more painful penalties to effectively address the causes of the behavior and not just the symptoms,” he maintained.

“Currently, laws and enforcement are well below what they should be for most abhorrent online activity,” said Enderle, “including things like identity theft that would typically be considered criminal acts.”


Twitter Steps Up Counterterrorism Efforts

Twitter last week announced it had suspended 235,000 accounts since February for promoting terrorism, bringing to 360,000 the total number of suspensions since mid-2015.

Daily suspensions have increased more than 80 percent since last year, spiking immediately after terrorist attacks. Twitter’s response time for suspending reported accounts, the length of time offending accounts are active on its platform, and the number of followers they draw all have decreased dramatically, the company said.

Twitter also has made progress in preventing those who have been suspended from getting back on its platform quickly.

Tools and Tactics

The number of teams reviewing reports around the clock has increased, and reviewers now have more tools and language capabilities.

Twitter uses technology such as proprietary spam-fighting tools to supplement reports from users. Over the past six months, those tools helped identify more than one third of the 235,000 accounts suspended.

Twitter’s global public policy team has expanded partnerships with organizations working to counter violent extremism online, including True Islam in the United States; Parle-moi d’Islam in France; Imams Online in the UK; the Wahid Foundation in Indonesia; and the Sawab Center in the UAE.

Twitter executives have attended government-convened summits on countering violent extremism hosted by the French Interior Ministry and the Indonesian National Counterterrorism Agency.

A Fine Balance

Twitter has been largely reactive rather than proactive, and that’s “been hit and miss, but from [its] standpoint, that’s probably the best they can do without being too draconian,” said Chenxi Wang, chief strategy officer at Twistlock.

“You could, perhaps, consider creating a statistical analysis model that will be predictive in nature,” she told TechNewsWorld, “but then you are venturing into territories that may violate privacy and freedom of speech.”

Further, doing so “is not in Twitter’s best interest,” Wang suggested, as a social network’s aim is for people “to participate rather than be regulated.”

Gauging Effectiveness

It’s not easy to judge Twitter’s success in combating terrorism online.

“How often does Twitter actually influence people who might be violent?” wondered Michael Jude, a program manager at Stratecast/Frost & Sullivan. “How likely is it that truly crazy people will use Twitter as a means to incite violence? And how likely is it that Twitter will be able to apply objective standards to making a determination that something is likely to encourage terrorism?”

The answers to the first two questions are uncertain, he told TechNewsWorld.

The last question raises “highly problematic” issues, Jude said. “What if Twitter’s algorithms are set such that supporters of Trump or Hillary are deemed terroristic? Is that an application of censorship to spirited discourse?”

There Oughta Be a Law…

Meanwhile, pressure on the Obama administration to come up with a plan to fight terrorism online is growing.

The U.S. House of Representatives last year passed the bipartisan bill H.R. 3654, the “Combat Terrorist Use of Social Media Act of 2015,” which calls on the president to provide a report on U.S. strategy to combat terrorists’ and terrorist organizations’ use of social media.

The Senate Homeland Security and Governmental Affairs Committee earlier this year approved a Senate version of the bill, which has yet to be voted on in the full chamber.

“It’s probably a good idea for the president to have a plan, but it would need to conform to the Constitution,” Jude remarked.

“Policies haven’t yet caught up … . It’s not out of the question that government policies may one day govern social media activities,” Twistlock’s Wang suggested. “Exactly how and when remains to be seen.”

Automatic Counterterrorism

YouTube and Facebook this summer began implementing automated systems to block or remove extremist content from their pages, according to reports.

The technology, developed to identify and remove videos protected by copyright, looks for hashes assigned to videos, matches them against hashes of content previously removed as unacceptable, and then takes appropriate action.
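
In outline, that matching step looks something like the simplified Python sketch below. It is only an illustration, not the platforms' proprietary systems: it uses a plain SHA-256 fingerprint, which catches byte-identical re-uploads, whereas the real media-matching tools reportedly use more robust fingerprints that survive re-encoding.

    import hashlib

    # Fingerprints of content previously removed as unacceptable.
    removed_hashes: set = set()

    def fingerprint(data: bytes) -> str:
        # A real system would use a media-aware hash; SHA-256 only matches
        # byte-identical copies.
        return hashlib.sha256(data).hexdigest()

    def record_removal(data: bytes) -> None:
        # When moderators remove content, remember its fingerprint.
        removed_hashes.add(fingerprint(data))

    def handle_upload(data: bytes) -> str:
        # Compare the upload against the block list and act on a match.
        if fingerprint(data) in removed_hashes:
            return "blocked"      # matches previously removed content
        return "published"        # no match; normal review flow applies

    # Usage: a re-upload of already-removed content is blocked automatically.
    clip = b"...video bytes..."
    record_removal(clip)
    print(handle_upload(clip))  # "blocked"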

That approach is problematic, however.

Such automatic blocking of content “goes against the concepts of freedom of speech and the Internet,” said Jim McGregor, a principal analyst at Tirias Research.

“On the other hand, you have to consider the threat posed by these organizations,” he told TechNewsWorld. “Is giving them an open platform for promotion and communication any different than putting a gun in their hands?”

“The pros of automatic blocking terrorist content online are it’s fast and it’s consistent,” observed Rob Enderle, principal analyst at the Enderle Group.

“The cons are, automatic systems can be easy to figure out and circumvent, and you may end up casting too wide a net — like Reddit did with the Orlando shooting,” he told TechNewsWorld.

“I’m all for free speech and freedom of the Internet,” McGregor said, but organizations posting extremist content “are responsible for crimes against humanity and pose a threat to millions of innocent people and should be stopped. However, you have to be selective on the content to find that fine line between combating extremism and censorship.”

There is the danger of content being misidentified as extremist, and the people who uploaded it then being put on a watch list mistakenly. There have been widespread reports of errors in placing individuals on the United States government’s no-fly list, for example, and the process of getting off that list is difficult.

“I have one friend who’s flagged just because of her married name,” McGregor said. “There needs to be a system in place to re-evaluate those decisions to make sure people aren’t wrongly accused.”

Fighting Today’s Battles

The automated blocking reportedly being implemented by YouTube and Facebook works only on content previously banned or blocked. It can’t deal with freshly posted content that has not yet been hashed.

There might be a solution to that problem, however. The Counter Extremism Project, a private nonprofit organization, recently announced a hashing algorithm that would take a proactive approach to flagging extremist content on Internet and social media platforms.

Its algorithm works on images, videos and audio clips.

The CEP has proposed the establishment of a National Office for Reporting Extremism, which would house a comprehensive database of extremist content. Its tool would be able to identify matching content online immediately and flag it for removal by any company using the hashing algorithm.
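
As a rough sketch of the idea, assuming a perceptual-style hash compared by Hamming distance (CEP has not published its algorithm), participating companies could check new uploads against the shared database along these lines:

    def average_hash(pixels):
        # Toy 64-bit perceptual hash over an 8x8 grayscale grid: one bit per
        # pixel, set if the pixel is brighter than the grid's mean.
        flat = [p for row in pixels for p in row]
        mean = sum(flat) / len(flat)
        bits = 0
        for p in flat:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a, b):
        # Number of differing bits between two hashes.
        return bin(a ^ b).count("1")

    def matches_database(candidate, database, threshold=5):
        # Flag content whose hash lands within `threshold` bits of any hash
        # in the shared extremist-content database.
        return any(hamming(candidate, known) <= threshold for known in database)

    # Usage: a mildly brightened copy of known content still matches.
    original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
    altered = [[min(255, p + 6) for p in row] for row in original]
    shared_db = {average_hash(original)}
    print(matches_database(average_hash(altered), shared_db))  # True: flag for removal

The design choice in this kind of scheme is that only hashes are shared, not the underlying media, so companies can match content without redistributing it.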

Microsoft’s Contribution

Microsoft provided funding and technical support to Hany Farid, a professor at Dartmouth College, to support his work on the CEP algorithm.

Farid previously had helped develop PhotoDNA, a tool that scans for and eliminates child pornography images online, which Microsoft distributed freely.

Among other actions, Microsoft has amended its terms of use to specifically prohibit the posting of terrorist content on its hosted consumer services.

That includes any material that encourages violent action or endorses terrorist organizations included on the Consolidated United Nations Security Council Sanctions List.

Recommendations for Social Media Firms

The CEP has proposed five steps social media companies can take to combat extremism online:

  • Grant trusted reporting status to governments and groups like CEP to swiftly identify and ensure the removal of extremist online content;
  • Streamline the process for users to report suspected extremist activity;
  • Adopt a clear public policy on extremism;
  • Disclose detailed information, including the names, of the most egregious posters of extremist content; and
  • Monitor and remove content proactively as soon as it appears online.