


The YouTube Dilemma: When Digital Advertising And Objectionable Content Collide

POST WRITTEN BY
Brian O'Kelley

“They need to stop just making money out of prurient, violent material.”

That was the assessment of Britain’s Foreign Secretary, Boris Johnson, in response to revelations that many of the world’s largest brands inadvertently found their digital advertisements appearing beside content that openly endorses violence, terrorism and hate speech. Even the British government itself demanded to know why ads for the BBC and the Royal Navy rolled on video content produced by “rape apologists, anti-Semites and banned hate preachers.”

At the center of the controversy is Google, the internet behemoth that owns and operates YouTube.

While the bulk of public attention has fallen on objectionable content that Google allows on YouTube, several major agencies and brands also announced that they will suspend or scale back their activity on AdX, Google’s third-party digital advertising exchange.

AdX is the world’s largest marketplace for digital advertising. As such, it enjoys outsized influence on how the internet is funded. If Google cleans up YouTube but neglects to do the same to AdX, the internet will suffer for it. To be fair, Google has made important investments over the years in inventory-quality technology and initiatives. It remains to be seen whether it will apply these capabilities consistently and uniformly.

But let’s back up. To appreciate the current dilemma, it’s important to understand how digital advertising operates.

Before the advent of the internet, the purchase of advertising – whether print, broadcast, or out-of-home – was largely a manual process. Brands, often represented by agencies, bought advertising inventory directly from trusted outlets. In many countries, broadcast outlets were, and still are, highly regulated; they face stringent restrictions on the images and language they are permitted to employ. Given this dynamic, traditional advertising carried minimal brand risk, since a buyer basically knew what to expect when placing an ad on the back cover of Time or during ABC’s Wide World of Sports.

But as consumers increasingly adopt digital channels to consume news, film, music and information, manual processes alone are insufficient to power advertising. Every time a consumer clicks on a page or opens an app, he or she produces multiple ad “impressions,” or opportunities to place an ad where the user will see it. To deliver billions of ads against billions of impressions in real time, each day, buyers and sellers increasingly use “programmatic” technology that stages an auction for each impression, decides what creative to show the end user, and delivers the ad – all in real time.
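The auction step described above can be illustrated with a minimal sketch. This is a simplified model for illustration only, not any exchange's actual implementation; the names (`Bid`, `run_auction`) and the second-price rule shown here (common in programmatic exchanges of this era, where the winner pays the runner-up's price) are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    buyer: str
    price_cpm: float  # bid price per thousand impressions (CPM)
    creative: str     # the ad to show if this bid wins

def run_auction(bids, floor_cpm=0.0):
    """Resolve one impression with simplified second-price rules:
    the highest bidder wins but pays the second-highest price
    (or the publisher's floor, if no second bid exists)."""
    eligible = [b for b in bids if b.price_cpm >= floor_cpm]
    if not eligible:
        return None  # impression goes unsold
    ranked = sorted(eligible, key=lambda b: b.price_cpm, reverse=True)
    winner = ranked[0]
    clearing = ranked[1].price_cpm if len(ranked) > 1 else floor_cpm
    return winner.buyer, winner.creative, clearing

# One impression, two competing buyers: brand_a wins, pays brand_b's price.
result = run_auction([
    Bid("brand_a", 4.50, "ad_a"),
    Bid("brand_b", 3.25, "ad_b"),
], floor_cpm=1.00)
```

In a real exchange this resolution happens billions of times a day, typically within about 100 milliseconds of the page loading.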

The problem for brands, of course, is that it’s no longer as easy to know where campaigns are appearing. There are literally millions of websites and apps. And some of them publish material that no brand would want its name associated with. That’s the AdX problem in a nutshell.

Part of the responsibility lies with advertisers themselves. Working with credible advertising technology partners (including third-party verification services), buyers of digital advertising can operate in what the industry terms a “white-listed environment” – meaning, they can bid only on a pre-determined list of domains that have passed quality stress tests. But exchanges also need to own up to their responsibility.
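The whitelisting approach described above amounts to a simple gate in the bidder: decline any impression whose domain is not on the pre-approved list. A minimal sketch, with hypothetical domain names standing in for a vetted list:

```python
# Hypothetical pre-approved list of domains that have passed quality checks.
APPROVED_DOMAINS = {"news.example.com", "video.example.org"}

def should_bid(impression_domain, whitelist=APPROVED_DOMAINS):
    """Bid only when the impression comes from a pre-approved domain."""
    return impression_domain in whitelist
```

The trade-off is coverage: a whitelist guarantees the buyer knows every domain it can appear on, at the cost of ignoring the long tail of unvetted sites.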

Despite the intricate technology infrastructure on which they’re built, advertising exchanges are digital marketplaces, and in this sense no different from any other marketplace that connects buyers and sellers. Most such marketplaces impose strict quality controls that govern what sellers can and cannot offer on the platform.

Amazon is a case in point. Prohibited categories include pornography, illegal items, stolen goods, pirated content and “offensive material.” To be sure, operating a clean marketplace sometimes requires exercising subjective judgment, but as Amazon’s guidelines explain, “what we deem offensive is probably about what you would expect.” You can sell a controversial book on its marketplace, but you can’t sell “crime-scene photos or human organs and body parts.”

For its part, eBay forbids the sale of alcohol, drugs and drug paraphernalia, and “offensive material,” a category that includes “ethnically or racially offensive material and Nazi memorabilia.” Etsy provides its sellers with “House Rules” that ban items that “present legal risks to our community” or “are inconsistent with our values, are harmful to our members, or simply are not in the spirit of Etsy.”

Marketplace quality is not just about what can and can’t be sold; it’s about ethical commercial practices. Most of the aforementioned companies impose transparency requirements on buyers and sellers. Amazon, for instance, requires that marketplace participants not “misrepresent” themselves or their products.

Most reasonable people would probably agree that these are good rules. So why don’t they apply equally to digital advertising exchanges?

In the interest of full disclosure, I am not a neutral party. My company operates one of the world’s largest digital advertising marketplaces and we’ve made large investments in developing automated and human processes to weed out domains that promote piracy, hate speech, pornography, graphic violence, or deceptive acts in commerce (a category that includes “fake news”). We also impose transparency requirements that make it difficult for sellers to misrepresent the inventory they bring to our exchange.

Like other digital marketplaces – be they Amazon, eBay, or Etsy – we can’t promise that noncompliant actors will never find their way onto our exchange, but we do try to enforce inventory quality measures consistently, on an ongoing basis, and with new and ever-more aggressive data science tools. Even when it’s uncomfortable – as was the case when my company blacklisted the political site Breitbart.com, because it violated our hate speech prohibition – we endeavor to place the brand safety of our advertisers above the expediency of momentary gain.

Running a clean marketplace is hard. It requires ongoing investments. It probably means leaving money on the table by declining to work with certain sellers or by excluding whole categories of inventory. But it’s the right thing to do, and the stakes are high.

Digital advertising funds much of the world’s journalism and creative content. If it isn’t sold in a brand-safe environment, the entire system will break down. That’s why it’s so fundamentally important that all companies in the digital advertising ecosystem work together to build better, more transparent, higher-quality marketplaces.