Facebook pledges tough U.S. election security efforts as
critical memo surfaces
July 25, 2018
By Joseph Menn
WASHINGTON (Reuters) - Facebook officials
on Tuesday said the company is using a range of techniques including
artificial intelligence to counter Russian operatives or others who use
deceptive tactics and false information to manipulate public opinion.
The officials told reporters in a telephone briefing they expected to
find such efforts on the social network ahead of the U.S. mid-term
elections in November, but declined to disclose whether they have
already uncovered any such operations.
Facebook has faced fierce criticism over how it handles political
propaganda and misinformation since the 2016 U.S. election, which U.S.
intelligence agencies say was influenced by the Russian government, in
part through social media.
The controversy has not abated despite Facebook initiatives including a
new tool that shows all political advertising that is running on the
network and new fact-checking efforts to inform users about obvious
falsehoods.
But the company reiterated on Tuesday that it will not take down
postings simply because they are false. Chief Executive Mark Zuckerberg
last week drew fire for citing Holocaust denials as an example of false
statements that would not be removed if they were sincerely voiced.
Tuesday's briefing, which included Nathaniel Gleicher, head of
cybersecurity policy, and Tessa Lyons, manager of Facebook's core "news
feed," came just before the publication of an internal staff message
from Facebook's outgoing chief security officer that was sharply
critical of many company practices.
The note by Alex Stamos, written in March after he said he was going to
leave the company, urged colleagues to heed feedback about "creepy"
features, collect less data and "deprioritize short-term growth and
revenue" to restore trust. He also urged the company's leaders to "pick
sides when there are clear moral or humanitarian issues."
[Photo caption: The Facebook logo is reflected on a woman's glasses in this photo illustration taken June 3, 2018. REUTERS/Regis Duvignau/Illustration]
Stamos posted the note on an internal Facebook site; Reuters confirmed its
authenticity. It was first disclosed by BuzzFeed News.
Stamos said the company needed to be more open in how it manages content on its
network, which has become a major medium for political activity in many
countries around the world. Tuesday's media briefing was part of the company's
efforts in that direction.
Lyons said the company was making progress in smoothing its process for
fact-checkers assigned to label false information. Once an article is labeled
false, users are warned before they share it and subsequent distribution drops
80 percent, Lyons said.
Posts from sites that often distribute false information are ranked lower in the
calculations that determine what each user sees but are not entirely removed
from view.
Gleicher said those seeking to deliberately promote misinformation often use
fake accounts to amplify their content or run afoul of community standards, both
of which are grounds for removing posts or entire pages.
He said the company would use a type of artificial intelligence known as machine
learning as part of its efforts to root out abuses.
(Reporting by Joseph Menn; Editing by Greg Mitchell, Jonathan Weber and Neil
Fullick)
© 2018 Thomson Reuters. All rights reserved. This material may not be published,
broadcast, rewritten or redistributed. Thomson Reuters is solely responsible for this content.