Tech firms race to spot video violence
April 28, 2017
By Jeremy Wagstaff
SINGAPORE (Reuters) - Companies from Singapore to Finland are racing to improve
artificial intelligence so software can automatically spot and block
videos of grisly murders and mayhem before they go viral on social
media.
None, so far, claim to have cracked the problem completely.
A Thai man who broadcast himself killing his 11-month-old daughter in a
live video on Facebook this week was the latest in a string of violent
crimes shown live on the social network. The incidents have
prompted questions about how Facebook's reporting system works and how
violent content can be flagged faster.
A dozen or more companies are wrestling with the problem, those in the
industry say. Google - which faces similar problems with its YouTube
service - and Facebook are working on their own solutions.
Most are focusing on deep learning: a type of artificial intelligence
that makes use of computerized neural networks. It is an approach that
David Lissmyr, founder of Paris-based image and video analysis company
Sightengine, says goes back to efforts in the 1950s to mimic the way
neurons work and interact in the brain.
Teaching computers to learn with deep layers of artificial neurons has
really only taken off in the past few years, said Matt Zeiler, founder
and CEO of New York-based Clarifai, another video analysis company.
It's only been relatively recently that there has been enough computing
power and data available for teaching these systems, enabling
"exponential leaps in the accuracy and efficacy of machine learning",
Zeiler said.
FEEDING IMAGES
The teaching system begins with images fed through the computer's neural
layers, which then "learn" to identify a street sign, say, or a violent
scene in a video.
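As a rough illustration of the kind of supervised training these firms describe, the sketch below feeds stand-in images through a tiny convolutional network and nudges its weights toward the right labels. The network architecture, the random "frames" and the benign/violent labels are all assumptions for demonstration, not any one company's actual system.

```python
# Minimal, generic sketch of supervised image classification in PyTorch.
# Random noise stands in for labeled video frames; real systems train on
# very large curated datasets.
import torch
import torch.nn as nn

class TinyFrameClassifier(nn.Module):
    def __init__(self, num_classes=2):  # e.g. "benign" vs. "violent" (assumed labels)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)                 # stacked "neural layers" extract visual patterns
        return self.classifier(x.flatten(1)) # map those patterns to a label score

model = TinyFrameClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 8 random 64x64 RGB "frames" with random labels.
frames = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

for step in range(5):                        # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()
```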
Violent acts might include hacking actions, or blood, says Abhijit
Shanbhag, CEO of Singapore-based Graymatics. If his engineers can't find
a suitable scene, they film it themselves in the office.
Zeiler says Clarifai's algorithms can also recognize objects in a video
that could be precursors to violence -- a knife or gun, for instance.
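How detections of such precursor objects might be turned into a flag is sketched below in highly simplified form. The detector itself is stubbed out, and the label set, confidence threshold and escalation rule are illustrative assumptions, not Clarifai's pipeline.

```python
# Illustrative only: flag frames whose object detections include likely
# "precursors" to violence. A trained object detector would supply the
# Detection objects; here they are constructed by hand.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str         # e.g. "knife", "gun", "person"
    confidence: float  # detector score in [0, 1]

PRECURSOR_LABELS = {"knife", "gun", "rifle"}  # assumed label set
CONFIDENCE_THRESHOLD = 0.7                    # assumed operating point

def frame_is_suspicious(detections: List[Detection]) -> bool:
    """True if any precursor object is detected with high confidence."""
    return any(d.label in PRECURSOR_LABELS and d.confidence >= CONFIDENCE_THRESHOLD
               for d in detections)

def video_needs_review(frames: List[List[Detection]], min_hits: int = 3) -> bool:
    """Escalate a video once enough individual frames look suspicious."""
    return sum(frame_is_suspicious(f) for f in frames) >= min_hits

# Example: three frames, two of which contain a high-confidence knife detection.
frames = [
    [Detection("person", 0.95)],
    [Detection("person", 0.93), Detection("knife", 0.81)],
    [Detection("knife", 0.77)],
]
print(video_needs_review(frames, min_hits=2))  # True
```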
But there are limits.
One is that the software is only as good as the examples it is trained on.
When someone decides to hang a child from a building, it's not
necessarily something the software has been programmed to watch for.
"As people get more innovative about such gruesome activity, the system
needs to be trained on that," said Shanbhag, whose company filters video
and image content on behalf of several social media clients in Asia and
elsewhere.
(Photo: Graymatics employees pretend to fight as they record footage to be used to "train" their software to watch and filter internet videos for violence, at their office in Singapore April 27, 2017. REUTERS/Edgar Su)
Another limitation is that violence can be subjective. A fast-moving
scene with lots of gore should be easy enough to spot, says Junle Wang,
head of R&D at France-based PicPurify. But the company is still working
on identifying violent scenes that don't involve blood or weapons.
Psychological torture, too, is hard to spot, says his colleague, CEO
Yann Mareschal.
And then there's content that could be deemed offensive without being
intrinsically violent -- an ISIS flag, for example -- says Graymatics's
Shanbhag. That could require the system to be tweaked depending on the
client.
STILL NEED HUMANS
Yet another limitation is that while automation may help, humans must
still be involved to verify the authenticity of content that has been
flagged as offensive or dangerous, said Mika Rautiainen, founder and CEO
of Valossa, a Finnish company which finds undesirable content for media,
entertainment and advertising companies.
Indeed, likely solutions would involve looking beyond the images
themselves to incorporate other cues. PicPurify's Wang says using
algorithms to monitor the reaction of viewers -- a sharp increase in
reposts of a video, for example -- might be an indicator.
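One way such a reaction signal could be computed, in a deliberately simplified form, is to compare each minute's repost count against the recent average; the window size and spike factor below are arbitrary assumptions, not PicPurify's method.

```python
# Simplified sketch: flag a video when its repost rate jumps well above
# its recent average. Real monitoring would combine many such signals.
from collections import deque

def repost_spike(counts, window=6, factor=5.0, min_baseline=1.0):
    """Return indices of time steps where reposts reach `factor` times
    the average of the preceding `window` steps."""
    history = deque(maxlen=window)
    spikes = []
    for i, count in enumerate(counts):
        if len(history) == window:
            baseline = max(sum(history) / window, min_baseline)
            if count >= factor * baseline:
                spikes.append(i)
        history.append(count)
    return spikes

# Reposts per minute for a hypothetical video: steady, then a sudden surge.
reposts_per_minute = [2, 3, 2, 4, 3, 2, 3, 41, 78, 120]
print(repost_spike(reposts_per_minute))  # [7, 8, 9]
```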
Michael Pogrebnyak, CEO of Kuznech, said his Russian-U.S. company has
expanded its arsenal of pornographic image-spotting algorithms -- which
mostly focus on skin detection and camera motion -- to include others
that detect the logos of studios and warning text screens.
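Skin detection of the kind Kuznech describes is often approximated with simple color rules before heavier models are applied; the toy version below uses a rough RGB rule of thumb, and both the thresholds and the 30 percent cut-off are assumptions for illustration only.

```python
# Toy skin-detection heuristic: call a frame "mostly skin" when a large share
# of its pixels falls inside a rough RGB range for skin tones. Production
# systems use learned models; this only illustrates the idea.
def looks_like_skin(r, g, b):
    """Very rough RGB rule of thumb for skin-tone pixels."""
    return r > 95 and g > 40 and b > 20 and r > g and r > b and abs(r - g) > 15

def skin_ratio(pixels):
    """Fraction of (r, g, b) pixels the heuristic calls skin-like."""
    if not pixels:
        return 0.0
    return sum(looks_like_skin(*p) for p in pixels) / len(pixels)

def flag_frame(pixels, threshold=0.30):
    """Flag the frame for review when skin-like pixels exceed the threshold."""
    return skin_ratio(pixels) >= threshold

# Example: a tiny 4-pixel "frame", half skin-toned, half background.
frame = [(210, 160, 130), (200, 150, 120), (30, 30, 30), (10, 60, 200)]
print(skin_ratio(frame), flag_frame(frame))  # 0.5 True
```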
Facebook says it is using similar techniques to spot nudity, violence or
other topics that don't comply with its policies. A spokesperson didn't
respond to questions about whether the software was used in the Thai and
other recent cases.
Some of the companies said industry adoption was slower than it could
be, in part because of the added expense. That, they say, will change.
Companies that manage user-generated content could increasingly come
under regulatory pressure, says Valossa's Rautiainen.
"Even without tightening regulation, not being able to deliver proper
curation will increasingly lead to negative effects in online brand
identity," Rautiainen says.
(Reporting By Jeremy Wagstaff; Editing by Bill Tarrant)
© 2017 Thomson Reuters. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.