Addressing Facebook and Google’s Harms Through a Regulated Competition Approach

April 10, 2020 | Anti-Monopoly Policies & Enforcement | Tech

Working Paper Series on Corporate Power: Paper #2

Imagine a policymaker proposing we place concentrated power over global online communications in the hands of two dominant corporations named Facebook and Google.

These corporations would amplify falsified news stories and pump out individually tailored propaganda, which would influence elections, result in violence, promote fake stories during a pandemic, and even facilitate genocide. They would enable a range of other societal harms: creating virulent new forms of discrimination, harming public health, intimately surveilling adults and children, imposing a private tax on small businesses, using design to addict users to their platforms, and driving the collapse of independent journalism, especially at the state and local levels.

We would dismiss this idea as a threat to our communities, our way of life, and our democracy. And yet, this is the policy path that the United States, and much of the world, has followed over the past decade.

Facebook and Google’s many interrelated harms trace back to two primary causes: their unregulated dominance over key communications networks, and their use of those networks to engage in surveillance and user manipulation to monopolize digital advertising revenue. Nearly every internet user, civil society group, and business uses networks owned by Facebook and Google. Without meaningful competition, they design their networks not to help us communicate, but to addict us to their services in order to sell more advertising. And as advertising revenue plummets during the Coronavirus crisis, Facebook and Google are poised to capture even more power.

Many of the problems Facebook and Google create are chalked up to “technology” or “the rise of the internet,” or described as an impersonal force with its own logic known as “surveillance capitalism.” But public policy choices, not simply technological advancement, led to today’s dystopian communications environment.

The public policy framework that enabled Facebook and Google to exist in their current forms is relatively new. Beginning in the 1980s, policymakers encouraged lax enforcement of merger laws and rollbacks of communications regulation. They also wrote laws encouraging concentration by shielding online communications networks from liability for illegal content that passes over their platforms, while allowing them to profit from advertising sold alongside that content.

This paper provides an overview of the policy choices that allowed Facebook and Google to develop a business model toxic to democracy, civil rights, and public health. It also breaks down what often seems to be an overwhelming and unsolvable challenge into a discrete set of solvable problems. Below, we lay out options for structuring online communications networks to mitigate the range of harms Facebook and Google currently create and help ensure they are compatible with a well-functioning democracy.

Addressing the harms induced by these dominant platforms will require policymakers to correct mistakes of the past and take two basic but essential steps. First, they must break up Facebook and Google so they are no longer too big to regulate. Second, policymakers must regulate the resulting market practices to prevent reconsolidation and protect essential values such as privacy, nondiscrimination, free speech, and diversity in the public square.

The solution, in other words, is not simply to break up these corporations or simply to regulate them, but to do both. We must aim for healthy online communications networks through regulated competition.

Addressing the root causes of Facebook and Google’s range of harms is not only necessary to protect democracy, but also politically achievable. In fact, lawmakers have already taken significant steps in this direction, with major investigations into the power of digital platforms currently underway in the House of Representatives, among state attorneys general, at the Federal Trade Commission and Department of Justice, and among global regulators and enforcers.

We do not presume to have all the answers, but we hope to advance the debate and help to center it around questions of power at a moment of political significance. We recognize that Facebook and Google’s market power affects numerous stakeholders, many of which have long advocated for various policy approaches. We welcome stakeholder input and critique on the solutions we propose below. And we look forward to continuing the increasingly vibrant dialogue about the best ways to restore regulated competition in media and advertising markets. Our democracy depends on the strength and integrity of this debate.

FACEBOOK AND GOOGLE: WHAT ARE THEY?

Facebook and Google are commonly understood as technology companies, but they are more accurately described as communications networks that sell digital advertising. Advertising makes up the vast majority of revenue for both corporations. Unlike traditional companies that make money from advertising – such as newspapers, television stations, radio, and magazines – these corporations primarily help people and businesses communicate with one another through search, social media, mapping, and other digital networks. Mark Zuckerberg, for instance, originally called Facebook a “social utility” to indicate its broad public purpose.

This paper adopts Zuckerberg’s frame and treats Facebook and Google as utilities. They are 21st century communications networks, managing the essential infrastructure of the digital world – like a search engine, a social network, or a mapping application – separate from the network of hardware that transmits data. Facebook and Google are the most dominant of these new communications networks, but there are others, such as Snapchat and TikTok, designed around a similar business model.

Unlike traditional communications networks, users don’t pay Facebook and Google directly for services. Instead, Facebook and Google make money from advertising, and have redirected much of the revenue that used to flow to the media industry, especially print and digital news and journalism.

These corporations are uniquely situated to profit from advertising, using a two-part strategy. First, both Facebook and Google own dominant communications networks. Using social networking to communicate means in large part using a Facebook network, and using search, online video, or mapping means using a Google network. Facebook and Google also control the user interfaces on which users rely for a suite of services, meaning these corporations can arrange the way users find information or communicate with friends and family so as to maximize the ads users see.

Lack of competition is essential to this dominance. For example, in the mid-2000s, Facebook competed with MySpace over privacy and safety. Users could and did flee MySpace for Facebook. Today, despite Facebook’s bevy of data-related scandals, the large and sustained increase in the number of ads it shows users, its tolerance of disinformation, and its elevated levels of surveillance, users have nowhere else to go.

What is a 21st century communications network?

Facebook and Google each operate multiple communications networks essential to the well-being and commercial prospects of billions of people. Their search facilities, social networks, maps, and video sharing services serve as key online interconnection points that facilitate the exchange of information among individuals and businesses. These services become regulable as communications networks when they achieve sufficient scale to serve as meaningful infrastructure for a community, and when there are fewer than three reasonable alternatives through which to easily gain or share unique and critical pieces of information.

As the Supreme Court noted in Munn v. Illinois, 94 U.S. 113 (1877), “When the owner of property devotes it to a use in which the public has an interest, he in effect grants to the public an interest in such use, and must, to the extent of that interest, submit to be controlled by the public, for the common good, as long as he maintains the use.” America has a long history of regulating the actions of private businesses for the public good, from colonial times, when colonial legislatures placed limits on the prices and offerings of ferries, common carriers, bakers, millers, and innkeepers, to the rate regulation of AT&T by the Federal Communications Commission in the 20th century. There is no reason to exempt Facebook and Google from traditional regulatory obligations.

Second, Facebook and Google use their dominant position as gatekeepers to the internet to surveil users and businesses, amass unrivaled stores of data, and rent out targeting services to third parties who can then target content – from ads for shoes to racist propaganda – at users with a perceived precision unrivaled by any other entity. This dominance grants Facebook and Google immense bargaining leverage with publishers and other stakeholders, who often have no choice but to hand over their own proprietary data to the platforms.

Advertising has always been a dangerous way to finance news gathering, because large advertisers can attempt to control what editors publish. To address this ethical risk, publishers have undertaken a range of mitigating strategies. Advertising and editorial departments are often separated, professional guilds work to protect journalistic integrity, and policy has encouraged a diversity of media outlets so that the corruption of one would not corrupt all of public discourse.

While advertisers don’t control Google and Facebook in the same way that they seek to control editorial content, this same ethical dilemma – the desire to distort the flow of information – takes a different form when applied to communications networks. The distortion occurs through business models reliant on excessive user engagement. The longer users remain on the platform – hooked on sensationalist content, which the platforms’ algorithms prioritize – the more money Facebook and Google make from advertising. This is where the false content, surveillance, addiction, and so forth originate. They are not unfortunate byproducts of the business model; they are core characteristics of it, essential to these corporations’ ad-based revenue models.

In other words, Facebook and Google are operating new types of essential communications networks that profit from amplifying untrustworthy, sensationalist, and addictive content, while destroying legitimate sources of news essential to a well-informed citizenry and introducing a range of other social harms.

A LEDGER OF HARMS

Facebook and Google’s market power and advertising-and-surveillance business model are at the root of a diverse yet interrelated set of individual and societal harms. The silo-like organization of policy conversations and expertise – privacy, discrimination, public health, monopoly, journalism, and so on – obscures this dynamic. Below we lay out a non-exhaustive list of some of the major ongoing problems that Facebook and Google cause.

SPREADING MISINFORMATION DURING A PANDEMIC. In early March, Facebook posts included fake news about large numbers of concealed cases in Taiwan, as well as doctored images purporting to show President Tsai Ing-wen going into quarantine after testing positive. Despite the platforms’ attempts to fact-check posts and videos, the Global Disinformation Index published survey results finding that Google provided ad services to 86% of the sites carrying coronavirus conspiracies.

INVADING USER AND BUSINESS PRIVACY. Facebook and Google know far more about the public than the public knows about Facebook and Google. This asymmetry of information enables unaccountable surveillance, manipulation, data breaches, and abusive behavior, for which the concept of privacy often serves as a catch-all.

DESTROYING LOCAL AND INDEPENDENT JOURNALISM. A vibrant and independent press is essential to maintain an informed citizenry but is in a state of collapse in democratic societies around the world. The primary cause of this collapse is Facebook and Google’s control of digital advertising revenue. Because Facebook and Google’s market power allows them to collect granular information on users for advertisers, these two corporations now command nearly 60 percent of all digital advertising revenue.

PROPAGATING FAKE NEWS, DISINFORMATION, AND POLARIZATION. Facebook and Google spread, amplify, and profit from falsified and divisive content. For example, in 2016, Russia executed “an extended attack on the United States by using computational propaganda to misinform and polarize US voters.” YouTube, which is owned by Google, has recommended Alex Jones’s videos involving conspiracy theories more than 15 billion times.

FACILITATING DISCRIMINATION. Facebook and Google’s business model inherently facilitates discrimination. Facebook’s model of personally targeted behavioral ads enables discrimination based on a variety of protected categories. In the past, Facebook allowed advertisers to exclude people based on their neighborhood “by drawing a red line around those neighborhoods on a map.” Other elements of users’ profiles can easily approximate race, gender, age, or sexuality for advertisers who seek to discriminate against different classes of people.

ENABLING UNACCOUNTABLE POLITICAL CENSORSHIP. Facebook and Google’s unilateral control over their platforms allows them to regulate the political speech that appears on those platforms. Because Facebook and Google are opaque and unaccountable, trusting them to regulate political speech or content is extraordinarily dangerous. The goal must be to eliminate their ability to censor and shape our public discourse, not empower it.

IMPOSING A START-UP AND SMALL BUSINESS TAX. Traffic from Facebook and Google is essential to reach customers across the economy, putting businesses that don’t advertise on Facebook and Google at a competitive disadvantage. But as the founder of one small business put it, “Google allows competitors to purchase ads on our trademark, blocking and misdirecting consumers from reaching our site.” In other words, Facebook and Google operate like phone directories, except that when a user dials a business’s number, Facebook and Google direct the call to whichever third party pays them the most.

REDUCING INNOVATION. Facebook and Google acquire or undermine start-ups that might compete with their products, creating what venture capitalists call a “kill zone” around sectors adjacent to these big tech giants. They undermine entrepreneurship, as economist Hal Singer puts it, by “favoring their affiliated content, applications, or wares in their algorithms and basic features.”

HARMING MENTAL HEALTH. Because Facebook and Google make more money from advertisers the longer users spend consuming advertising on their platforms, they have strong incentives to deploy design tactics that promote addiction and unhealthy use. Both Facebook and Google have also facilitated radicalization by feeding users increasingly sensationalist and conspiratorial content.

HARMING PHYSICAL HEALTH. Facebook and Google directly harm physical health. They allow advertisers to target people likely suffering from eating disorders with diet pills, people recovering from opioid addiction with opiates, those likely to be skeptical of vaccines with anti-vaccination disinformation, and people living with HIV/AIDS with sham medications, to name just a handful of examples.

HOW DID WE GET HERE?

To understand how to neutralize Facebook and Google’s dangers to society, it is important to understand a brief history of telecommunications regulation and antitrust enforcement. Examining the evolution of regulatory and enforcement regimes designed around other types of communications networks – the post office, newspapers, radio, television – helps inform the path forward.

For roughly two hundred years, beginning with the Postal Service Act of 1792, American policymakers sought to ensure a decentralized and independent media and communications environment. The First Amendment shielded private speech from government censorship. Equally important were public policies to structure markets for advertising, media, and communications. Policymakers subsidized media through universal low-cost or free distribution of information via the mail, promoted neutrality in critical speech platforms, and prioritized diversity of speech. Advertising revenue served as an important shield for publishers against financial control by the government and the wealthy.

Throughout most of the 20th century, a combination of telecommunications policies and advertising markets supported a profusion of radio and television stations, as well as a host of local newspapers and specialized magazines. Until recently, between 60 and 80 percent of newspaper revenue came from advertising. Policymakers also implemented variations of the Fairness Doctrine, a regulation that required broadcasters on the publicly licensed airwaves to present contrasting views on controversial issues of public importance, reflecting a desire to prevent concentrated media ownership from fusing with partisan political interests.

From the 1930s to the 1970s, the Federal Communications Commission pursued three main policy goals. As described by Supreme Court Justice Thurgood Marshall, one was diversity of speech via diverse and local ownership of media platforms. Another was “the best practicable service to the public” consistent with the technology of the day (FCC v. National Citizens Committee for Broadcasting, 436 U.S. 775 (1978)). For communications networks, universality of service was yet another goal.

Congress and regulators sought to decentralize ownership and control of media markets, often when these industries were in their infancy. The Radio Act essentially forced AT&T, which sought to leverage its telephone infrastructure to dominate this new medium, to sell its nascent radio network. In 1940, the Federal Communications Commission imposed strict ownership caps on FM stations.

In 1942, the FCC and the Department of Justice forced NBC to spin off its second national radio network, which turned into ABC, a national competitor for eyeballs and ad dollars. In 1953, the FCC wrote the so-called 7-7-7 rules, which prohibited anyone from owning more than 7 television licenses, 7 AM radio licenses, and 7 FM radio licenses nationwide. The FCC updated these rules to include cable systems in 1970. In 1975, the FCC barred any company from owning both a newspaper and a broadcast station in the same market.

Regulators also reduced network operators’ power over the flow of advertising and information, thus preventing operators from using that power to extort or bully other market participants. In 1941, the FCC blocked national radio networks from seizing control of advertising sales and programming choices, allowing local affiliates to more easily substitute local programming for network offerings. These rules were intended to make it easier for affiliates to sell non-network advertising. Radio-network publicists prophesied doom for the industry, arguing the rules were a prelude to a government takeover of the airwaves. Instead, revenue and the number of broadcasters soared.

Regulators and antitrust enforcers also imposed a wide variety of rules that restricted the power of any particular corporation in telecommunications and advertising markets. These restraints specifically promoted decentralization and fair competition. For instance, the Justice Department filed a suit against a monopoly on outdoor advertising, as well as a suit to stop price fixing in the typesetting business. In another case, the Justice Department sued a group of businesses for monopolizing supplies and services to rural newspapers (United States v. General Outdoor Advertising Company, Inc. (1950); United States v. Thomas P. Henry Co. (1950); United States v. Western Newspaper Union, et al. (1951)).

Regulators also prevented one part of an industry from controlling the whole media supply chain, or what is known as vertical integration, in both television and film. In 1970, the FCC adopted the Financial Interest and Syndication Rules, which stopped TV networks from owning the programming aired in prime time. As a result, TV networks had to buy their prime time programming from independent production companies and studios throughout the 1970s and 1980s, which gave artists more negotiating leverage and helped open television production to new voices. Decades of litigation also broke up the motion picture industry, separating production from distribution and ending a series of abusive practices, like requiring theaters to show unwanted films as a condition of showing sought-after blockbusters (United States v. Paramount Pictures, Inc., 334 U.S. 131 (1948)).

Policymakers also prohibited certain false and deceptive practices on broadcast speech platforms. In 1960, after a series of pay-to-play scandals, Congress eliminated a conflict of interest in advertising markets by outlawing the practice of “payola,” or undisclosed sponsorship payments to radio DJs, stations, and television personalities and shows. With this limit on licensed broadcast channels, advertising would be aboveboard and disclosed.

Enforcers and regulators also sought to intervene in high-technology markets to block monopolization at its incipiency. The Justice Department prohibited AT&T from competing outside its communications business in 1956, in part to prevent the company from leveraging its monopoly power in one market to advantage itself in others. This stricture forced the corporation to license its patents to the burgeoning electronics and digital computer companies, allowing them to grow without interference from AT&T. The FCC also sought to stamp out abusive behavior in the nascent online services market. Standards for new technologies, like those underlying fax transmissions, email, and the nascent internet, were often open, allowing competition within these new markets to flourish.

This system still had significant deficiencies and injustices. Local newspaper monopolies often had excessive local control over advertising and news flow, and structural racism and sexism limited the availability of advertising revenue and infrastructure to support publications and media outlets challenging the status quo. Publications like the legendary African-American newspaper The Chicago Defender faced a perpetually precarious financial existence, and people risked lives and livelihoods to distribute it in the South because white-owned firms would not carry it. Dramatically unequal access to capital along racial and gender lines presented an additional barrier to creating a just media ecosystem. But ensuring that power in the media and telecommunications industries was not concentrated was, and should remain, a critical baseline for protecting and expanding democracy in America. Without decentralization, the system will never be just.

THE RISE OF MONOPOLY POWER

In the late 1970s, policymakers reversed their presumptions, increasingly deferring to assertive capital market actors on Wall Street. This shift manifested in the loosening of FCC rules capping media ownership and mandating political speech neutrality on publicly licensed airwaves. The subsequent consolidation of media outlets beginning in the 1980s turned into a merger tidal wave after the passage of the Telecommunications Act of 1996, which lifted caps on media ownership.

Underpinning this transition was an ideological transformation driven by the “law and economics movement,” which originated at the University of Chicago and successfully reoriented policymakers’ approach to structuring markets. Under the influence of writers like Robert Bork, policymakers, on a bipartisan basis, shifted the main goals of antitrust away from ensuring decentralization and fair rivalry within markets. Instead, they sought to allow corporate concentration within and across markets, often under the assumption that market dominance signaled nothing more than efficiency.

In industries created after this framework became dominant, such as the personal computer industry, concentration became extreme. Competitors battled to privatize technical standards, creating monopolies around these complex specifications. While the standards underlying fax and email were open so that anyone could create fax and email products, the operating system for personal computers and the standards underpinning social media were under the control of individual corporations. Modern technology corporations sought to compete for the market itself, rather than within the market.

Antitrust enforcers also weakened merger law, viewing mergers in terms of their presumed impact on consumer prices instead of their impact on the competitive process. This pivot set the stage for corporations like Facebook and Google, which offered tools at no monetary cost to consumers, to go on merger sprees unencumbered by meaningful challenges.

The Reagan and Clinton Administrations also chipped away at the regulatory structures and vertical and horizontal restraints that had kept telecommunications networks partitioned and decentralized. Part of this involved the Reagan Administration’s confusing attack on AT&T, which appeared to be a break-up of a monopoly. However, the underlying goal was to separate the regulated business components of AT&T from its other divisions as a means of deregulating telecommunications markets. The Reagan administration also loosened media ownership restrictions and eliminated the Fairness Doctrine.

The Clinton Administration continued Reagan’s approach, with the Telecommunications Act of 1996 enabling consolidation in the telecommunications and broadcast industries. The law included a little-noticed provision, Section 230 of the Communications Decency Act (itself part of the broader statute), intended to enable the moderation of online chat rooms. Section 230 shielded providers of “interactive computer services” from liability for speech posted on their platforms by others. Section 512 of the Digital Millennium Copyright Act offered a parallel shield for platforms to avoid liability for copyright violations on their networks.

These shields led to an explosion of innovation. Platforms could experiment without worrying, as publishers must, that the content they monetized was illegal or included copyright violations. Importantly, these laws also enabled digital-era communications networks to finance themselves with advertising revenue, but without any liability for the content they facilitated.

While the Telecommunications Act eliminated a variety of restraints in media markets, it did bar telephone companies from spying on customers for marketing purposes through what were known as Customer Proprietary Network Information rules. However, these protections have not been extended to online platforms that engage in surveillance and advertising. The Clinton Administration also brought suit against Microsoft for antitrust violations (United States v. Microsoft Corp., 253 F.3d 34 (D.C. Cir. 2001)). This suit created an opening for young corporations such as Google to succeed. But the ultimate failure of the suit, combined with an increasingly conservative Supreme Court’s further narrowing of antitrust law, meant that the burst of innovation unleashed by the case was followed by the growth of new, even more powerful monopolies.

During the 1990s, regulators and policymakers were somewhat aware of possible anticompetitive discrimination in the physical infrastructure of internet access, which led to the battle over “net neutrality” and the question of who controls that infrastructure. But they did not recognize the possibility of centralized control over the flow of information in the content layer of the web. They did not see the possibility of monopolization and centralization among advertising networks, search engines, or other nascent communication network applications.

Ironically, within the industry, the moral dialogue about market structure was prescient. In 1998, Google founders Larry Page and Sergey Brin made the most cogent case against advertising-financed business models in their paper describing the underpinnings of Google’s search engine. The key problem with ad-financed search indexes was that they would engage in self-dealing; search engines financed by ads were “inherently biased towards the advertisers and away from the needs of the consumer.” Page and Brin also observed that search engines would seek to prevent users from leaving their properties in order to sell more advertising. This problem is what would later be called engagement: the need to create algorithms that keep users on a platform. But policymakers, to the extent they acknowledged these arguments, did not incorporate them into any coherent regulatory framework.

Because of the ideological dominance of the law and economics movement, even under Democratic administrations the Federal Trade Commission (FTC) viewed consumer protection as a question of adequate disclosure regimes, focusing on solutions that required consumers to be notified when their data was collected. The FTC largely did not consider that, as companies accumulated data on their users, they would be able to structure bargaining power – to water down or eliminate competition – among different agents in the marketplace, including publishers, platforms, and users. Instead, market power was inappropriately shoe-horned into the framework of privacy law, where it remains today.

In 1999 and 2000, the FTC held a series of workshops and issued reports on the nascent online behavioral advertising market; the outcome was industry self-regulation framed around privacy. In 2007, the FTC offered updated principles to guide self-regulation in the now-thriving market for behavioral advertising. These principles were further extended in a 2009 FTC staff report, issued under a Democratic majority. The embrace of self-regulation set up a contest in which competition within the online advertising market could take place based on self-dealing and conflicts of interest.

This policy environment enabled relative newcomers Facebook and Google to roll up the online advertising industry (see Appendix). From 2004 to 2014, Google spent at least $23 billion buying 145 companies, including Maps, Analytics, YouTube, Gmail, Android, and, critically, its 2007 purchase of DoubleClick. The DoubleClick acquisition, which the Federal Trade Commission approved, gave Google control over the plumbing used to deliver ads from advertisers to publishers in the display ad market. Prior to this acquisition, online advertising was a segmented and competitive series of markets. The merger radically concentrated power over the flow of information and advertising. Google then combined DoubleClick data with its search and Gmail data into data sets that gave ad buyers unrivaled information with which to reach potential customers.

Facebook, too, acquired competitors without regulatory intervention. Most notably, in 2012, the Federal Trade Commission unanimously approved Facebook’s $1 billion purchase of Instagram. Mark Zuckerberg bought the company because he saw it as a competitive threat to his social networking business; in other eras, such a merger would have been challenged. A year later, in 2013, Facebook acquired a company called Onavo, allowing it access to granular data on how people used rival apps in order to monitor potential competitive threats. Then in 2014, Facebook paid $19 billion for the secure communications messaging service WhatsApp, a popular program used by large numbers of people around the world to send private text messages to one another, and which posed a potential threat to Facebook’s original social network.

The result is that Google and Facebook enjoy extraordinary market power, with users firmly locked into their services. Today, Google has eight products with more than a billion users each, and Facebook has four products with more than a billion users each. Acquisitions have allowed these two corporations to control the richest and widest data sets on human populations ever assembled. Their reach, combined with data, gives these platforms gatekeeping power over who wins and loses online. These advantages insulate them from basic market-based accountability mechanisms, like competition from rivals for users and advertisers. Facebook and Google can increase the number of ads in their services or degrade the quality of their products through elevated levels of surveillance without consequence.

SOLUTIONS

Below, we propose three key objectives for policymakers and enforcers that, taken together, will help address Facebook and Google’s excessive accumulation of power and the range of harms that derive from it.

ONE: INVESTIGATE ONLINE AD AND MEDIA MARKET STRUCTURE

Facebook and Google know far more about advertising markets than anyone else, including policymakers. An aggressive set of investigations into the structure of online advertising and media markets, using granular business data, would answer key questions. How do Facebook and Google take fees in the opaque, automated marketplaces for online advertising? How do they use business or personal data? What are the specific business relationships and contractual terms that underpin their market power?

Across the globe, enforcers and regulators are undertaking studies or initiating court cases that expose aspects of how these corporations operate. Germany, France, the European Union, Israel, India, Singapore, Russia, Mexico, and Australia have all started down this path. In the United States, the U.S. House Judiciary Committee’s Subcommittee on Antitrust, Commercial and Administrative Law is currently engaged in a bipartisan investigation of Facebook, Google, Apple, and Amazon.

But more tools are available. To help guide policymakers, the FTC should use its authority under Section 6(b) of the FTC Act to study the online advertising and media industry. The FTC has a long history of critical investigative and report-writing work. Its investigations led to the Packers and Stockyards Act of 1921, the Grain Futures Act of 1922, the Securities Act of 1933, the Securities Exchange Act of 1934, the Public Utility Holding Company Act of 1935, and the Robinson-Patman Act of 1936 – all legislation that restructured industrial or financial sectors.

Such an investigation would require a sectoral analysis of each individual platform and the market in which it operates. These analyses should provide a rich understanding of platforms underpinning online video, maps, search, third-party tracking, location tracking, app stores, ad data, and ad exchanges. In addition, such analyses should be particularly attentive to racial or gender biases that might be embedded in allegedly neutral algorithms.

TWO: RESTORE ACCOUNTABILITY THROUGH STRUCTURAL SEPARATIONS AND MEASURES TO ENHANCE COMPETITION

Facebook and Google are complex and powerful institutions that are too big to regulate. Simplifying their business models by separating various lines of business into independent corporations would enable policymakers to understand and regulate these institutions. Establishing and enforcing additional rules to prevent reconsolidation is also necessary to restore market-based accountability.

There are several ways that Facebook and Google can be structurally separated:

  • SEPARATION BY FUNCTION. Facebook and Google subsidiaries can be separated out based on function. For instance, splitting out general search from mapping, Android, and YouTube could be one approach. The advantage of function-based structural separation is that individual divisions in Google, for example, would no longer have an incentive to privilege other divisions within Google. General search, for instance, would no longer automatically funnel addresses or local search terms to Google Maps, but could choose other partners based on open business terms.
  • BUSINESS LINE SEPARATION. Facebook and Google subsidiaries could be separated out along existing technical business lines to enhance competition among horizontal competitors. Facebook, Instagram, and WhatsApp, for example, could be separated to heighten competition over quality, such as improved privacy settings, reduced surveillance, the quality of their mobile applications, or the quantity of ads.
  • VERTICAL SEPARATION. Some business lines could be separated between production and distribution. For example, general search can be divided into the search engine web page Google.com, the underlying web crawl (the unseen function that indexes internet sites for searching), and the ad feed. This division would enable other firms to license the underlying web crawl, creating a new and open competitive market of search engines that might compete on the quality of their interface or on improvements to the underlying search technology. As recently as 2002, Google supplied the underlying search technology to Yahoo, while Yahoo served as a retail brand powered by Google.
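
To make the vertical-separation idea above concrete, below is a minimal, purely illustrative Python sketch of how a standalone retail search front-end might consume a separately owned web-crawl/index service and an independent ad feed over ordinary APIs. The class name IndependentSearchFrontEnd and the endpoints are hypothetical assumptions for illustration; nothing here describes Google’s actual architecture.

```python
# Hypothetical sketch: a standalone search front-end that licenses an
# independent web-crawl/index service and buys ads from a separately owned
# ad feed, rather than owning all three layers itself. All names and
# endpoints are illustrative assumptions.

import requests  # any HTTP client would do


class IndependentSearchFrontEnd:
    """A retail search brand built on a licensed index and a separate ad feed."""

    def __init__(self, index_api, ad_feed_api):
        self.index_api = index_api      # licensed web-crawl/index provider
        self.ad_feed_api = ad_feed_api  # independently owned advertising feed

    def search(self, query):
        # Organic results come from the licensed index, which the front-end
        # does not own and therefore cannot bias toward affiliated properties.
        organic = requests.get(self.index_api, params={"q": query}).json()

        # Ads come from a structurally separate ad feed; the front-end
        # competes on interface quality, not on privileged data access.
        ads = requests.get(self.ad_feed_api, params={"q": query}).json()

        return {"organic": organic, "ads": ads}


# Usage with hypothetical endpoints:
# engine = IndependentSearchFrontEnd("https://index.example/search",
#                                    "https://ads.example/bid")
# results = engine.search("bicycle repair near me")
```

The design point is that the retail search brand competes on interface and relevance while licensing crawl data and ad services on open terms, much as Yahoo once operated as a retail brand powered by Google’s underlying search technology.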

These approaches are not mutually exclusive; functional, business line, and vertical separation can all be pursued at once. Such changes would result in new, independent companies – Instagram, WhatsApp, Facebook, Messenger, Maps, Google Search technology, Google Search Engine, YouTube, Android/Play, Google Advertising, Analytics, Drive, Chrome, Gmail, and infrastructure services – that could innovate around a variety of business models.

There are several legal pathways to achieve structural separations. They include:

  • ANTITRUST ENFORCEMENT. Large bipartisan groups of state attorneys general are currently undertaking broad antitrust investigations into Facebook and Google. These investigations could – and should – result in structural separations as a remedy. The Justice Department and the FTC are also reviewing antitrust concerns and could file suit. While conservative courts could reasonably be expected to blunt aggressive remedies, this outcome is not foreordained and should be aggressively tested.
  • STATUTE. Congress could force break-ups through approaches modeled on past legislation, like the Public Utility Holding Company Act of 1935, which precluded utility-holding companies from operating across state lines. Congress or the FTC could also impose interoperability requirements to end the chokehold over the flow of information that Facebook and Google currently command. Bipartisan legislation, the ACCESS Act, has been introduced to accomplish this goal. The National Institute of Standards and Technology (NIST) and the FTC could also serve as convening bodies for the development of interoperability standards. NIST has recently promulgated standards on fingerprinting technology, electricity grid interoperability, industrial control system security, and cybersecurity.
  • REGULATION. Finally, the FTC could act under its Section 5 authority to prohibit unfair methods of competition. This section of the law provides fairly open-ended authority and could be used to bar business models with embedded conflicts of interest.

Separating Facebook and Google into their component parts would reduce their power, but even Facebook’s social network or Google search as standalone corporations would be too powerful. Additional steps are necessary to promote competition in each resulting market. They include:

  • INTEROPERABILITY MANDATES. Their status as closed networks with more than a billion users each fortifies Facebook and Google’s market power. After policymakers have reduced the power of these networks by separating them out into simpler lines of business, interoperability requirements can reduce the power of any one line of business. For new entrants to succeed, Facebook and Google’s services should be interoperable with those provided by startups and smaller companies, so users can connect and communicate with other users still on dominant platforms. This approach is similar in concept to the way in which a phone operating on a Verizon network is interoperable with one operating on an AT&T network (a brief sketch after this list illustrates the idea). Interoperability is essential for facilitating new market entrants that would otherwise be locked out as a result of the network advantages that incumbents create through acquisitions and market power.
  • REGULATE DATA AS AN ESSENTIAL FACILITY. Large stores of data also underpin Facebook and Google’s market power. After policymakers have reduced the power of these networks by separating them out into simpler lines of business, ensuring equal access to data constricts the power of any one line of business and fosters innovation. Currently, the debate over privacy is confused with the debate over the competitive playing field. We can choose to restrict the use of private data or enable it. Right now, however, there are essentially two different privacy regimes: Facebook and Google can use intrusive private data, while other corporations cannot. The point of regulating data as an essential facility is to ensure equal access among all businesses to essential data. For example, if Google’s underlying mapping data were publicly available under fair, reasonable, and non-discriminatory terms (FRAND, an established legal concept), as proposed by Senator Mark Warner and others, the mapping space would likely have a range of differentiated, competitive offerings. Whatever choices we make about privacy, those choices should remain distinct from decisions that hand any one corporation a competitive advantage based on superior access to data.
  • BAN ABUSIVE PRACTICES. Preventing platforms from controlling publishers, businesses, and users requires far stronger enforcement of rules against abusive behavior. This step includes updating standards for false and deceptive practices to cover user interface designs that incentivize addiction or deception, banning unreadable end user license agreements, ending the practice of communications networks favoring their own products or services, ending the practice of forcing companies to pay for their own trademarks as search terms, and prohibiting payola-style deals such as Google’s payment to Apple to set the default search engine for Safari to Google’s product.
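
As a purely illustrative sketch of the interoperability concept referenced above, the hypothetical Python below shows two independently operated networks exchanging messages through a shared, openly specified format, so a user on a new entrant can reach a user on an incumbent. The Message schema, the Network class, and the domain names are assumptions for illustration only; they do not describe the ACCESS Act or any existing standard.

```python
# Hypothetical sketch of interoperable communications networks: two
# independently owned networks exchange messages through a shared, openly
# specified format, so users need not be on the same platform to communicate.
# The schema, class names, and domains are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Message:
    sender: str      # e.g. "alice@smallstart.example"
    recipient: str   # e.g. "bob@bignetwork.example"
    body: str


class Network:
    """One independently operated network that honors the shared format."""

    def __init__(self, domain):
        self.domain = domain
        self.inboxes = {}  # username -> list of received Messages

    def register(self, user):
        self.inboxes[user] = []

    def accept(self, msg):
        # A conforming network must accept messages addressed to its users,
        # regardless of which network the sender belongs to.
        user = msg.recipient.split("@")[0]
        self.inboxes.setdefault(user, []).append(msg)


def deliver(msg, networks):
    """Route a message to whichever network hosts the recipient."""
    recipient_domain = msg.recipient.split("@")[1]
    networks[recipient_domain].accept(msg)


# A user on a small new entrant can still reach a user on a dominant incumbent:
small = Network("smallstart.example")
big = Network("bignetwork.example")
small.register("alice")
big.register("bob")

deliver(Message("alice@smallstart.example", "bob@bignetwork.example", "Hi Bob!"),
        {"smallstart.example": small, "bignetwork.example": big})
print(big.inboxes["bob"][0].body)  # -> Hi Bob!
```

The analogy is the same as telephone interconnection: what matters is that every conforming network must accept properly addressed messages, regardless of where the sender has an account.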

THREE: CREATE NEUTRAL PLATFORMS BY REGULATING BUSINESS DATA RULES AND ADVERTISING MARKETS

Facebook and Google’s current business model relies on surveillance, addiction, discrimination, and manipulation. To turn these platforms into safe, neutral networks for communication, policymakers must focus on regulating advertising practices, and in particular on the dangers inherent in targeted advertising. This could take a number of approaches:

Ultimately, communications networks like Facebook and Google should be prohibited from advertising altogether. A simple prohibition on communications networks operating advertising businesses would eliminate the incentive to surveil and manipulate users. As one example of how such a rule has been enacted in the past, telecommunications regulations used to ensure that local phone networks could not exploit knowledge of the people or firms their customers contacted for marketing purposes. Similarly, a dominant communications network like Facebook or Google should have no incentive to misuse user data, nor should it interfere with its customers’ communications based on payments from a third party. Although Facebook and Google are primarily financed through advertising, both have non-advertising revenue they could amplify through subscriptions or so-called “freemium” approaches that mix free and paid services.

  • BAN FACEBOOK AND GOOGLE FROM MOST FORMS OF AD TARGETING. Because a blanket ban on advertising would be too disruptive, preventing communications networks from engaging in most forms of targeted advertising could serve as an intermediate step. Such a ban would dramatically reduce incentives to collect and store user information. Facebook and Google would move to what are known as “contextual advertising” business models, displaying ads based on the context of the publication or web page rather than the identity of the user (see the sketch following this list). While barring behavioral advertising would not address a host of harms, such as the incentive to self-deal or the over-utilization of online services, it could act as a bridging policy to minimize disruption before a more comprehensive ban takes effect. Other policies would be needed to address third-party data brokers and other businesses that rely on invasive surveillance.
  • IMPLEMENT A “DO NOT TRACK” LIST. This approach would be similar to a ban on third-party targeted advertising, although it would allow users to opt in to targeting with ads that use third-party data. This solution should be a complement to, rather than a substitute for, structural separation.
  • MAKE FACEBOOK AND GOOGLE RESPONSIBLE FOR CONTENT. Communications networks should be forced to choose between being regulated as communications facilities or as publishers. Requiring this decision means modifying Section 230 of the Communications Decency Act and Section 512 of the Digital Millennium Copyright Act so that large communications networks do not receive liability protection if they profit from advertising or targeted advertising. This modification would likely force Facebook and Google to change their business model. Amending Section 230 and Section 512 would also help restore a level playing field for publishers, who are legally responsible for the content they publish.
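
As a purely illustrative sketch of the distinction between contextual and behavioral targeting discussed in the list above, the hypothetical Python below contrasts the two approaches. The matching rules and data structures are simplified assumptions, not any platform’s actual ad-serving logic.

```python
# Illustrative contrast between contextual and behavioral ad targeting.
# The data structures and matching rules are simplified assumptions,
# not any platform's actual ad-serving logic.

def contextual_ad(page_keywords, ad_inventory):
    """Pick an ad using only the content of the page being viewed."""
    for ad_id, topic_keywords in ad_inventory.items():
        if page_keywords & topic_keywords:  # topical overlap with the page
            return ad_id
    return None


def behavioral_ad(user_profile, ad_inventory):
    """Pick an ad using a stored profile of the individual user."""
    for ad_id, targeting in ad_inventory.items():
        if all(user_profile.get(k) == v for k, v in targeting.items()):
            return ad_id
    return None


# Contextual targeting needs nothing about the reader, only the page:
print(contextual_ad({"cycling", "repair"},
                    {"ad-bike-tools": {"cycling", "tools"}}))      # -> ad-bike-tools

# Behavioral targeting requires a surveillance-derived dossier on the reader:
print(behavioral_ad({"age_band": "18-24", "interest": "dieting"},
                    {"ad-diet-pills": {"interest": "dieting"}}))   # -> ad-diet-pills
```

The policy point is visible in the function signatures: contextual selection needs only the page being viewed, while behavioral selection cannot operate without a stored profile of the individual reader.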

A NOTE ON OTHER PROPOSALS

While well-intentioned, solutions that accept the inevitability of Facebook and Google’s monopoly power and business model are ultimately inadequate and, in some cases, could fortify their dominance. We offer brief commentary on several popular approaches:

  • REGULATION THROUGH A NEW OR EXISTING AGENCY WITHOUT ADDRESSING FACEBOOK AND GOOGLE’S POWER IS INSUFFICIENT. There is a great temptation to attempt to match the monopoly power of tech platforms with the regulatory power of the state, particularly in the context of protecting people from dangerous content or privacy violations. Such an approach would lead inevitably to regulatory capture. Google and Facebook know much more about their business than enforcers and regulators, so their power must be broken and fragmented before regulatory approaches can succeed.
  • PURSUING A PRIVACY-ONLY APPROACH WOULD BE INCOMPLETE. As long as Facebook and Google profit from a business model fueled by surveillance and data collection, strong incentives will exist to violate user privacy and evade regulations. This approach also risks falling short due to lax enforcement of privacy laws. There are, however, policy frameworks centered around protecting privacy that would also restructure Facebook and Google’s business models, such as an opt-out Do Not Track feature.
  • BANS ON POLITICAL ADS HAND POLICING POWER TO FACEBOOK AND GOOGLE. Banning political ads, or banning microtargeting solely for political ads, is not a safe long-term approach. Deferring to Facebook or Google to determine what constitutes political content is inherently dangerous and subject to self-serving bias and manipulation. Since there is no hard and fast rule for determining what is political in nature, voluntary strictures risk skewing public discourse in dangerous and unpredictable ways. This approach would require a public authority to determine what is political – an inherently fraught and likely impossible task – as well as extensive audits and policing of extraordinarily complex platforms.
  • FUNDING NEWSGATHERING BY TAXING PLATFORMS WON’T PRESERVE A FREE PRESS. State funding for journalism — either discretionary or through some sort of tax on targeted advertising — can be dangerous. A revenue stream derived from online targeted advertising creates an incentive for speech platforms to protect the business model of Facebook and Google. Even state funding, without any link to online advertising, creates a dependency of the free press on the state. Advertising is a mechanism to finance newsgathering and shield that newsgathering from the state. State or philanthropic funding can supplement a diverse free press, but it is not a sustainable substitute for a vibrant and decentralized advertising market.

A MOMENT OF OPPORTUNITY

Today, policymakers and enforcers around the globe are pursuing investigations and new policy approaches focused on the sources of Facebook and Google’s power and abuse: their reliance on digital advertising to generate profit and their dominance of the digital advertising market. We hope this paper establishes several key claims: 1) Facebook and Google’s broad range of harms to society are interrelated and caused or exacerbated by their focus on digital advertising and their dominance of it; 2) Facebook and Google’s dominance and business model are the result of repeated, ideologically driven regulatory and law enforcement mistakes, rather than inevitable technological developments; and 3) a critical moment of political opportunity exists to spur policymakers to address these harms at the root by building consensus around an approach of regulated competition.

We welcome feedback and opportunities to learn from others who are developing solutions to the challenges that Facebook and Google pose. We’re grateful to the many individuals whose writing and ideas have informed this paper, and to the broader community of policymakers, experts, advocates, business people, and tech workers who all seek to make the technology captured by Facebook and Google serve rather than subvert our democracy.

 

APPENDIX

Below is a list of Facebook and Google acquisitions collected using public records. The data comes from the companies’ annual investor reports, 10K financial reports, press releases, and general industry news coverage. Due to the small size and valuation of many technology startups, some acquisitions did not meet the reporting threshold, and therefore are not included on this list. For similar reasons, many of the listed acquisitions have undisclosed financial terms. Despite the incomplete data, the list of over 300 acquisitions portrays Google and Facebook’s control over new ventures in their respective marketplaces.

*Please see PDF version for appendix.