
Facebook on Thursday said it is ramping up the use of artificial intelligence in a push to make the social network "a hostile place" for extremists to spread messages of hate.

Pressure has been building on Facebook, along with other internet giants, which stand accused of doing too little, too late to eradicate hate speech and jihadist recruiters from their platforms.

In a joint blog post, the social network's global policy management director Monika Bickert and counterterrorism policy manager Brian Fishman said Facebook was committed to tackling the issue "head-on."

"In the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online," Bickert and Fishman said in the post.

"We want Facebook to be a hostile place for terrorists," they said, adding: "We believe technology, and Facebook, can be part of the solution."

They described how the network is automating the process of identifying and removing jihadist content linked to the Daesh group, Al-Qaeda and their affiliates, and intends to add other extremist organizations over time.

Artificial intelligence is being used, for instance, to recognise when a freshly posted image or video matches one known to have been previously removed from the social network -- which counts nearly two billion users and operates in more than 80 languages.

Facebook is also experimenting with machine learning to understand language well enough to identify words or phrases praising or supporting terrorism, according to the post.

And the social network is using software to try to identify terrorism-focused "clusters" of posts, pages, or profiles.

Facebook said it has also gotten better at detecting fake accounts created by "repeat offenders" previously booted from the social network for extremist content.

The effort extends to other Facebook applications, including WhatsApp and Instagram, according to Bickert and Fishman.

Meanwhile, because AI can't catch everything and sometimes makes mistakes, Facebook is also expanding its human review teams: it previously announced it would hire an extra 3,000 staff to track and remove violent video content.

"We're constantly identifying new ways that terrorist actors try to circumvent our systems -- and we update our tactics accordingly," Bickert and Fishman said.

Facebook, Twitter, Microsoft and Google-owned YouTube announced a drive last December to stop the proliferation of videos and messages showing beheadings, executions and other gruesome content.

But they remain under intense scrutiny, and G7 leaders last month issued a joint call for internet providers and social media firms to step up the fight against extremist content online.
