There is no shortage of solutions being put forth to solve the
challenge of social media censorship. The problem is that without a better
understanding of how social platforms invisibly shape the public square of
democracy today, we don’t know which of these possible solutions might have the
greatest impact. In short, to fix social media, we first need a better
understanding of its ills: Section 230 must be amended to legislate social
platform transparency.
A new RealClearFoundation report, “Transparency Is the First
Step Toward Addressing Social Media Censorship,” outlines the public data sets
we need to usher in transparency and better understand the challenges we face.
Much as a doctor cannot prescribe a treatment plan for a patient without first
diagnosing the specific ailments from which they suffer, meaningful reform of
social media censorship requires data-driven interventions. Facebook whistleblower
Frances Haugen warned last week that "as long as Facebook is operating in the
shadows, hiding its [work] from public scrutiny, it is unaccountable." The
problem, as Haugen notes, is that we lack even the most basic data on how social
platforms function and how they affect society, and without that data we cannot
know how best to regulate them.
How did we get here?
Almost since America’s founding, the nation has wrestled with what Supreme Court
Justice John Marshall Harlan once called the “intractable” problem of defining
what kinds of speech should be permitted and which should be banned. Early
efforts focused on regulating speakers themselves; over time the focus shifted to
gatekeepers, leaving citizens free to express their views under the First
Amendment while limiting the distribution of undesirable views to the public.
Censorship rules reflecting local morals gave way to centralized national rules,
which social platforms have today turned into global rules.
Allowing states to
define acceptable speech failed to prevent conflicts, as states attempted to
silence speech originating beyond their borders, while centralizing power meant a single set of rules
had to be defined for an entire nation. These speech arbitrators evolved from
government officials in the Post Office era to private companies in the motion
picture and early radio era to hybrid models in the later broadcasting era. Left
in private hands, publishers censored topics and public figures they disliked.
Left in government hands, policy dissent and criticism were silenced. Left to
the courts, consensus was elusive and the rules ever-changing.
The end result was the creation of Section 230.
A nation that had tried every conceivable approach to defining acceptable
speech, from local to federal, government to private, mandatory to voluntary,
courts to capitalism, essentially gave up and asked private companies in Silicon
Valley to take over and decide for themselves what America should be allowed to
see and say. Seduced by the idea that the precision of mathematics and computer
code could solve what two centuries of democracy could not, Congress granted
private Internet companies near-absolute power to regulate digital speech
globally.
The problem is that Section 230 failed to require anything in return for this
near-absolute immunity. It simply trusted that these companies would always put
the nation’s best interests ahead of their own profit. States were explicitly
barred from narrowing 230’s protections and government was given no oversight
role, depriving the American public of any ability to influence or even see the rules
that govern the digital public square. Most importantly, Section 230 failed to
require even the most basic transparency about how companies wielded its
protections.
Social media companies today routinely restrict
posts and suspend, ban, or demonetize users without any explanation
or by citing vague or unrelated policies. Search the web for the
phrase “suspended with no explanation” along with the name of any
major social platform, and endless pages detailing user experiences
will be returned. Even mainstream media is not exempt, as Twitter’s
ever-changing explanations for banning the New York Post’s Hunter
Biden laptop story illustrate. Yet the only glimpses of the detailed
rules defining what is “acceptable” tend to come from leaks of
internal company documents to the press rather than voluntary
disclosures by the companies themselves.
Moreover, Section 230’s reach now extends beyond
the web to the physical world, as Facebook arbitrates which protest
marches are permissible to promote; Uber and Lyft ban users over
their tweets; Airbnb banishes users over their group affiliations;
Amazon refuses to publish books it disagrees with; and even
Microsoft crafts acceptable use policies for desktop software. Even
foreign-owned TikTok banned Donald Trump, as did its U.S. peers,
reflecting the increasingly international reach of Silicon Valley’s
rules.
Where do we go from here?
Fixing social media requires first understanding how these
unimaginably powerful black boxes truly function internally. Section
230 must be amended to require that, in return for the liability
immunity they enjoy, Internet platforms make the real-world
rules, algorithms, design decisions and management directives that
determine what we see each day accessible to policymakers,
researchers, the press and the public at large. Social media’s reach
into Americans’ daily lives is too great to leave our understanding
of its harms and undue influence to the courage of whistleblowers. A
new Section 230 “transparency amendment” would require that social
platforms make an array of key datasets publicly available, thus
replacing chance leaks with routine disclosure and enabling
policymakers to have informed, data-driven debates as they seek to
chart a regulatory path forward.
Transparency alone cannot solve our diverse and divided nation’s
disagreements over the ideas, beliefs, knowledge, and speech that
should guide our democratic debates over our shared future. What it
can do is transform today’s closed and seemingly capricious systems
into a public process — akin to our legal and electoral systems —
that can be scrutinized and publicly debated.
Today’s RealClearFoundation report, “Transparency
Is the First Step Toward Addressing Social Media Censorship,” offers
a glimpse of what this transparent future might look like, while the
full-length research report behind it, “Social Media, Digital
Censorship & the Future of Democracy,” details America’s 2½-century
journey to a world in which a handful of unelected billionaires
wield near-absolute control over digital speech, with the power to
censor citizens and governments alike, arbitrate “acceptable speech”
for the entire planet, determine “truth” and even silence the
presidency.
RealClear Media Fellow Kalev Leetaru is a senior
fellow at the George Washington University Center for Cyber &
Homeland Security. His past roles include fellow in residence at
Georgetown University’s Edmund A. Walsh School of Foreign Service
and member of the World Economic Forum’s Global Agenda Council on
the Future of Government.