As ISIS and other Islamic militant groups continue to terrorize our nation and other countries, social media networks like Facebook and Twitter are doing their part to make sure that these militant groups can’t use their networks to post propaganda and recruit new members.
On Friday, December 4, Facebook removed a profile it believed belonged to Tashfeen Malik, one of the San Bernardino shooters accused of killing 14 people in an attack that the FBI is investigating as "an act of terrorism," according to Reuters. One day earlier, on December 3, Reuters reported that the French prime minister and European Commission officials each met separately with Facebook, Twitter, and Google, as well as other companies, to demand faster action against terrorist militant groups on the Web. The social media companies are being discreet about what they're doing, as they don't want a reputation for "policing" the Internet.
As a result of those meetings, it appears the social networks are ready to comply to a certain extent, though each network has its own policy. They will remove content that violates their terms of service but require court orders to block or remove content that goes beyond that.
One thing to remember is that any user on these networks can report or flag content for review, with the possibility of it being taken down. That was the case for a French-speaking activist on Twitter, @NageAnon, who, with the help of additional volunteers, helped rid YouTube of militant videos by alerting it to policy violations on thousands of videos.
The full picture of the social networks' cooperation with Western law enforcement agencies may be murkier, though. According to former employees at Facebook, Twitter, and Google, the networks worry that if they are public about their cooperation with law enforcement, they will face constant demands to remove content from other countries, in addition to being seen by their users as chess pieces for the government.
While Reuters reports that the social networks don't treat government complaints any differently than citizen complaints, there are workarounds. If a government official complains that a threat, hate speech, or celebration of violence violates a social network's terms of service, that content can be taken down within minutes, without the paper trail that would accompany a court order. Facebook says it removed Malik's profile because it violated the network's community standards against promoting acts of terror, as there was pro-Islamic State content on Malik's page.
Obviously, I think we'd all, including the government, prefer that this type of content never make its way onto the Internet in the first place. But that would require the social media platforms to build technology that could scan a paragraph or image before a user actually posts it to the network.
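For illustration only, the simplest version of the pre-post scan imagined above would be a keyword filter run before a post is published. Everything below, the phrase list and the function name, is a hypothetical sketch, not any network's actual system; real platforms would need far more sophisticated classifiers to avoid false positives.

```python
# Hypothetical sketch of a pre-post screening check. The blocked-phrase
# list is purely illustrative; it is not drawn from any real network.
BLOCKED_PHRASES = {"act of terror", "join our fight"}

def should_hold_for_review(post_text: str) -> bool:
    """Return True if a draft post matches a blocked phrase and
    should be held for human review before it is published."""
    text = post_text.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)
```

Even this toy version shows why the idea is hard in practice: exact-phrase matching misses misspellings and images entirely, while looser matching would flag innocent posts.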
Strides have been made to block terrorist content, though. Twitter adjusted its abuse policy to ban both indirect and direct threats of violence, while Facebook has banned any content on its network that praises terrorists, as was the case with Malik's profile. While we'd all like much more protection against terrorist threats and propaganda on our social networks, I think we're heading in the right direction.
And as the old adage goes, if you see something, say something. If you see content on your news feed or timeline that you think is questionable, mark it as inappropriate and let the social network investigate it on its end.