The Washington Post

Mass propaganda used to be difficult, but Facebook made it easy

Americans want — and need — regulation of microtargeting now

Perspective
February 14, 2020 at 6:26 a.m. EST

Last month, Facebook announced that it will not limit microtargeting capabilities for political ads. Nor will it police the factual accuracy of those ads. As much as the spread of blatant falsehoods on social media rankles Americans, that second judgment may be wise: Facebook has neither the capacity nor the legitimacy to act as an arbiter of truth. But the propagation of falsehoods, combined with their targeting to narrow but receptive audiences, is a dangerous combination. And recent public opinion data suggest that the public overwhelmingly agrees.

While concern over misinformation may seem like a post-2016 phenomenon, it isn’t. In the 1940s, the rapid rise in mass media and the devastating use of mediated propaganda by the Nazis caused great alarm. To alleviate fears that this propaganda would overwhelm the United States, Columbia University sociologists Paul Lazarsfeld and Robert Merton suggested that mass persuasion through the American media system was quite difficult. They outlined three criteria necessary for its success — monopolization, canalization and supplementation — and explained how these criteria were nearly impossible to meet in the media system at the time.

But our media system has changed, and the reasons cited in this scholarly plea for calm in 1948 are the very reasons we should be worried in 2020 — and why we must demand regulation of Big Tech right now.

When Lazarsfeld and Merton were writing, the American media landscape was dominated by broadcast radio and newspapers. By 1945, radios were in virtually every home and the broadcast industry was consolidated, with four networks owning almost all the radio stations in the United States.

Lazarsfeld and Merton argued that propaganda in this media ecosystem was unlikely to work, because it required conditions that didn’t exist. First among these was “monopolization,” or the absence of countermessaging. In 1948, because of the size of the audiences tuning into network radio shows, media industries made money by selling national ads broadcast to giant mass audiences. While there would be ads for Coke, there would also be ads for Pepsi. While there would be ads for Republicans, there would also be ads for Democrats. The means did not exist to segment audiences with precision so that Coke-drinking Republicans only received ads for Coke and Republican candidates.

This inability to target messages with precision had another consequence that made a successful propaganda campaign unlikely: According to Lazarsfeld and Merton, propaganda could not persuade people of something totally alien to their beliefs. Successful propaganda required “canalization,” they said; it must build upon preexisting attitudes and values to be successful. Water couldn’t flow in the correct direction without a canal, the thought went. With media having such a broad reach, there was no capacity to target audiences based on preexisting views — so strategic canalization was impossible.

But the mass audience that Lazarsfeld and Merton described hasn't existed for over 40 years. As cable television spread in the 1980s, media outlets proliferated; audiences grew in number but shrank in size. Network executives crafted niche programming to attract particular kinds of viewers they could sell to advertisers; hence the birth of MTV, BET and Playboy (to name a few) between 1979 and 1982.

With early Internet access reaching homes in the 1990s, developers built upon this now-familiar “fragmented media model” that had dominated the economics of cable. Most early commercial websites were thus aimed at audiences and consumers whom corporations would target with information, goods and services without necessarily capitalizing on the inherent interactivity of the technology.

But the dot-com crash of the early 2000s motivated web developers and venture capitalists to reconsider whether that economic model could sustain the Internet. It had failed to capitalize on the Internet's unique affordances: The technology allowed users to create and share information thanks to the decentralization of control across the network; there was no gatekeeper. In fact, this was the reason the Department of Defense invested in the creation of the Internet during the Cold War: to decentralize control over information in case a military attack destroyed records, data, plans or documents.

In the aftermath of the dot-com crash, these properties became the focus of new platform development. Sometimes called Web 2.0, or the Social Web, platforms like Myspace (2003), Facebook (2004), YouTube (2005) and Twitter (2006) capitalized on the decentralized logic of the network to empower users to create and share content, rate and like content, and connect with one another.

Doing so offered up another profit model — turning every share, like, click, photo and location tag into data points used to inform the sale of microtargeted audiences to advertisers. Unlike cable TV, which could only target audiences based upon the type of programming they tuned into, now advertisers could use individual-level behaviors — including social behaviors — to inform the content, aesthetics and reach of ads.

These economic and technological changes have created a media environment that satisfies the precise conditions Lazarsfeld and Merton identified in 1948 as requirements for a successful propaganda campaign. In the past, authoritarian regimes wielded state-run media (monopolization) to craft propaganda that capitalized on preexisting beliefs, attitudes and fears (canalization). While no state-run media infrastructure exists in the United States, thanks to Facebook's microtargeting features, individual citizens can still find themselves in information environments monopolized by just one side.

The Guardian recently revealed just how specialized these microtargeting efforts can be: "158,909 (or 72 percent) of the ads that [President] Trump's campaign ran in 2019 were seen by between zero and 999 Facebook users." And such narrowly targeted ads are cheap to run: it is possible to create hundreds, even thousands, of distinct political ads on the same theme, each variation targeted to a group of users sharing a unique mix of identities and interests. The Trump campaign did exactly this in 2016, running 5.9 million Facebook ads from June to November.
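To see how a single theme balloons into thousands of distinct ads, consider the combinatorics of pairing message variants with audience traits. The sketch below is a purely hypothetical illustration in Python; the headlines, images and targeting categories are invented, not drawn from any real campaign or from Facebook's ad tools.

```python
from itertools import product

# Hypothetical message variants and audience traits; a real campaign would
# pull these from creative testing and a platform's targeting categories.
headlines = ["Protect your town", "Secure the border", "Defend your rights"]
images = ["flag.jpg", "family.jpg", "factory.jpg"]
age_bands = ["18-29", "30-44", "45-64", "65+"]
interests = ["hunting", "veterans", "small business", "faith"]

# Every combination becomes a distinct ad aimed at a distinct micro-audience.
ads = [{"headline": h, "image": img, "age": a, "interest": i}
       for h, img, a, i in product(headlines, images, age_bands, interests)]
print(len(ads))  # 3 * 3 * 4 * 4 = 144 distinct ads from one theme
```

Add a few more trait dimensions and the count climbs into the thousands with no extra creative effort.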

Facebook is also a canalization machine. People are targeted based on interests, purchases, demographics, political characteristics — and the synergy between these categories. Facebook's Custom Audiences feature allows political campaigns to upload lists of users to target with ads. These lists are typically built from voter files (provided by parties or consulting firms) and email lists (collected from supporters or from other campaigns).
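As a rough illustration of how this list-based matching works, the Python sketch below hashes an advertiser's email list and intersects it with the platform's own hashed records. Facebook's documentation describes Custom Audiences matching on SHA-256 hashes of normalized identifiers, but the function names and data structures here are illustrative assumptions, not Facebook's code.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Normalize an email (strip whitespace, lowercase) and SHA-256 hash it,
    the general scheme that lets lists be matched without exchanging raw
    identifiers."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def match_custom_audience(campaign_list: list[str],
                          platform_records: dict[str, int]) -> list[int]:
    """Return platform user IDs whose hashed email appears in the campaign's
    uploaded list (e.g., a voter file or supporter email list)."""
    hashed = {normalize_and_hash(e) for e in campaign_list}
    return [uid for h, uid in platform_records.items() if h in hashed]

# Toy usage: three "platform users", a two-address voter file, one match.
platform = {normalize_and_hash(e): uid for uid, e in
            enumerate(["ann@example.com", "bob@example.com", "cat@example.com"])}
voter_file = ["Ann@Example.com ", "dan@example.com"]
print(match_custom_audience(voter_file, platform))  # -> [0]
```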

Facebook also touts Lookalike Audiences, which allow campaigns to target entirely new people who look just like the people already on advertisers' lists. According to Facebook, once advertisers upload a Custom Audience, Facebook's Lookalike tool "identifies the common qualities of the people in it … and finds similar people."
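Facebook does not disclose the model behind this tool, but the general idea of "finding similar people" can be sketched as a nearest-neighbor search: score every non-seed user by similarity to the seed audience's average feature vector. The toy below, using cosine similarity in Python, is an assumed simplification, not Facebook's actual algorithm.

```python
import numpy as np

def lookalike(seed: np.ndarray, pool: np.ndarray, k: int) -> np.ndarray:
    """Rank pool users by cosine similarity to the seed audience's mean
    feature vector (its "common qualities"), returning the top-k indices."""
    centroid = seed.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    pool_norm = pool / np.linalg.norm(pool, axis=1, keepdims=True)
    scores = pool_norm @ centroid          # cosine similarity to the centroid
    return np.argsort(scores)[::-1][:k]    # indices of the k most similar users

# Toy usage: rows are users, columns are interest/behavior features.
rng = np.random.default_rng(0)
seed_users = rng.random((50, 8))    # the uploaded Custom Audience
all_users = rng.random((1000, 8))   # everyone else on the platform
print(lookalike(seed_users, all_users, k=10))
```

However the real system is built, the effect is the same: a list of a few thousand supporters becomes a targeting template for millions of strangers who share their measurable traits.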

Facebook helps guarantee the success of these tools by building close relationships with campaign operatives, ensuring that campaigns have the skills to capitalize on them. Trump's campaign didn't devise its targeting plan independently; it worked side by side with Facebook employees who facilitated canalization.

There is one more important piece of the puzzle. The two Columbia sociologists told Americans they had little to worry about in 1948 because media propaganda needed to be supplemented by interpersonal communication to be successful. While the audiences for popular radio broadcasts were large, there was little opportunity for show producers to interact with audiences. Nor were there mechanisms for the thousands of radio listeners to know who else was in the audience, much less to interact with one another.

But social media platforms were built for such interaction. We share stories along with new information, recontextualized from our personal perspectives. We receive stories from news outlets only after they have been shared, and implicitly deemed credible, by friends and family. And we respond quickly, reacting to mediated information and sharing it as our own.

Importantly, this is also the aspect of social media that is perhaps most democratizing. By connecting individuals with one another, these platforms facilitate political discussion, collective action and grass-roots activism, all of which are healthy aspects of democratic life.

But today's tech platforms satisfy all three criteria: monopolization, canalization and supplementation. As such, a successful propaganda campaign, one that could efficiently cultivate racist, misogynist, anti-Semitic, anti-democratic or otherwise dangerous views, is clearly possible.

This makes it incumbent upon regulators to act, and act quickly. Self-regulation isn’t realistic; these companies’ interests are profoundly different from the public interest.

Regulation must limit the extent to which platforms allow political advertisers to target individuals (limiting both the nature and specificity of user affinities used to target ads and the number of voters they can reach at once relative to the size of the electoral contest).

Importantly, there is overwhelming public support for limits on microtargeting. According to new data gathered by the Knight Foundation in concert with Gallup, 72 percent of Americans oppose the use of any individual data for microtargeting by political campaigns. This is a rare nonpartisan issue — 75 percent of Republicans and 69 percent of Democrats believe no individual-level data should be used by political campaigns to target them online.

Successful regulation must also compel platforms to disclose the criteria used to create Custom and Lookalike audiences. Such transparency would make counterspeech possible.

Finally, regulation must require and facilitate data access for academic researchers, so that we may study these phenomena outside the companies' watchful corporate eye.

Microtargeting is far from inherently bad. Small campaigns and grass-roots organizations use Facebook's cost-effective microtargeting tools to build lists of supporters, solicit donations and mobilize volunteers, all of which benefit voters and candidates. But without rules governing microtargeting criteria and transparency, these same tools can be used to entrench ideologies or mobilize hate groups.

In 1948, it was appropriate for social scientists to tell the public to relax about the exploitability of America’s mass media infrastructure by propagandists. But today, it’s equally appropriate for social scientists to sound the alarm. We need regulation, and we need it now.