Google to tackle YouTube terror videos
Google has unveiled four measures it will use to tackle the spread of terror-related material online.
Among them are smarter software that can spot extremist material and greater use of human experts to vet content.
It said terrorism was an “attack on open societies” and tackling its influence was a critical challenge.
It said it had worked hard to remove terrorist content for a long time but acknowledged that more had to be done.
The steps it plans to take were outlined in an editorial published in the Financial Times newspaper.
The steps apply mainly to Google’s video sharing site YouTube.
It said it would:
1. put more engineering resources into training software that uses "machine learning" to identify videos glorifying terrorism and violence
2. recruit more people to act as "trusted flaggers" who could make final decisions about videos its software struggled to classify
3. take a tougher stance on videos that violated YouTube policies
4. expand the work it did to help counter-radicalisation efforts
In addition, it said, it would work with Facebook, Microsoft and Twitter to establish an industry body that would produce technology other smaller companies could use to police problematic content.
Labour MP Yvette Cooper said Google’s pledge to take action was “welcome”.
As chair of the House of Commons Home Affairs Select Committee, Ms Cooper oversaw a report that was heavily critical of social networks and their efforts to root out illegal content.