Exclusive: Facebook to ban misinformation on voting in upcoming U.S. elections
October 16, 2018
By Joseph Menn
MENLO PARK, Calif. (Reuters) - Facebook Inc
will ban false information about voting requirements and fact-check fake
reports of violence or long lines at polling stations ahead of next
month's U.S. midterm elections, company executives told Reuters, the
latest effort to reduce voter manipulation on its service.
The world's largest online social network, with 1.5 billion daily users,
has stopped short of banning all false or misleading posts, something
that Facebook has shied away from as it would likely increase its
expenses and leave it open to charges of censorship.
The latest move addresses a sensitive area for the company, which has
come under fire for its lax approach to fake news reports and
disinformation campaigns, which many believe affected the outcome of the
2016 presidential election, won by Donald Trump.
The new policy was disclosed by Facebook's cybersecurity policy chief,
Nathaniel Gleicher, and other company executives.
The ban on false information about voting methods, set to be announced
later on Monday, comes six weeks after Senator Ron Wyden asked Chief
Operating Officer Sheryl Sandberg how Facebook would counter posts aimed
at suppressing votes, such as by telling certain users they could vote
by text, a hoax that has been used to reduce turnout in the past.
Information about voting methods thus becomes one of the few areas in which
falsehoods are prohibited on Facebook, a policy enforced by moderators applying
what the company calls its "community standards," although enforcement of those
standards has been uneven. The ban will not stop the vast majority of
untruthful posts about candidates or other election issues.
“We don’t believe we should remove things from Facebook that are shared
by authentic people if they don’t violate those community standards,
even if they are false,” said Tessa Lyons, product manager for
Facebook's News Feed feature that shows users what friends are sharing.
Links to discouraging reports about polling places that may be inflated
or misleading will be referred to fact-checkers under the new policy,
Facebook said. If then marked as false, the reports will not be removed
but will be seen by fewer of the poster's friends.
Such partial measures leave Facebook more open to manipulation by users
seeking to affect the election, critics say. Russia, and potentially
other foreign parties, are already making "pervasive" efforts to
interfere in upcoming U.S. elections, the leader of Trump's national
security team said in early August.
Just days before that, Facebook said it uncovered a coordinated
political influence campaign to mislead its users and sow dissension
among voters, removing 32 pages and accounts from Facebook and Instagram.
Members of Congress briefed by Facebook said the methodology suggested
Russian involvement.
Trump has disputed claims that Russia has attempted to interfere in U.S.
elections. Russian President Vladimir Putin has denied it.
WEIGHING BAN ON HACKED MATERIAL
Facebook instituted a global ban on false information about when and
where to vote in 2016, but Monday's move goes further, including posts
about exaggerated identification requirements.
A woman looks at the Facebook logo on an iPad in this photo
illustration taken June 3, 2018. REUTERS/Regis Duvignau/Illustration
Facebook executives are also debating whether to follow Twitter
Inc's recent policy change to ban posts linking to hacked material,
Gleicher told Reuters in an interview.
The dissemination of hacked emails from Democratic Party officials
likely played a role in tipping the 2016 presidential election to
Trump, and Director of National Intelligence Dan Coats has warned
that Russia has recently been attempting to hack and steal
information from U.S. candidates and government officials.
A blanket ban on hacked content, however, would limit exposure to
other material some believe serves the public interest, such as the
so-called Panama Papers, which in 2016 made public the extensive use
of offshore tax havens by the world's wealthy.
Months ago, senior Facebook executives briefly debated banning all
political ads, which produce less than 5 percent of the company's
revenue, sources said. The company rejected that because product
managers were loath to leave advertising dollars on the table and
policy staffers argued that blocking political ads would favor
incumbents and wealthy campaigners who can better afford television
and print ads.
Instead, the company checks political ad buyers for proof of
national residency and keeps a public archive of who has bought
what.
Facebook also takes a middle ground on the authenticity of personal
accounts. It can cite automated activity it detects as grounds to disable
pages spreading propaganda, as happened last week, but it does not require
phone numbers or other proof of individual identity before allowing
people to open accounts in the first place.
On the issue of fake news, Facebook has held off on a total ban,
instead limiting the spread of articles marked as false by vetted
fact-checkers. However, that approach can leave fact-checkers
overwhelmed and able to tackle only the most viral hoaxes.
“Without a clear and transparent policy to curb the deliberate
spread of false information that applies across platforms, we will
continue to be vulnerable,” said Graham Brookie, head of the
Atlantic Council’s Digital Forensic Research Lab.
(Reporting by Joseph Menn; Editing by Greg Mitchell, Bill Rigby and
Leslie Adler)
© 2018 Thomson Reuters. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.