Did fake news have a material effect on the recent US election?
It’s the subject du jour, with recent discussion about the creation and distribution of fake news from industry leaders like Jeff Jarvis, Ev Williams, and Fred Wilson to media companies such as Bloomberg, the WSJ, and the NYTimes. There is no shortage of insightful opinions, all sharing the hope of bringing awareness, conversation, and eventually a process to solve the fake news problem.
To me, asking whether companies such as Facebook, Twitter, or Taboola are media companies or technology companies is just a question of semantics. The debate shouldn’t be what industry you align with as a company. The real issue at stake is – are we accountable for what we distribute?
Let me divide the conversation into two separate parts. Both equally important, but very different.
Part One: Should intentional confusion (Fake News) be allowed?
Short answer: no.
Let’s define it. Fake news is content written to intentionally deceive consumers into believing something that is not true, whether for commercial, political, or other reasons. The intent is to convince consumers that they are reading fact-based news or editorial content when in fact it is baseless propaganda or an advertisement.
Taking the example from Ev Williams’ post — consider a website pretending to be ESPN while it is actually a site called “espn.com-magazine.online” which was created to sell muscle building products by basking in the imputed credibility of ESPN. It is an advertisement disguised as editorial content and many people clicking on this website will believe they landed on ESPN. In turn, they may mistakenly believe the products sold are backed by a reputable source like ESPN, which may influence their decision to make a purchase.
A second example of fake news is a website designed to look like a legitimate news source, providing faux veracity to products like diet pills. Its mission is to capitalize on a need people may have, like losing weight, by communicating that the advertised product is newsworthy. We’ve all seen the blatant use of editorial rhetoric: “Research surprise: This Nutrient Burns Fat While You Sleep.”
A third example is the range of fake news that is catalyzing the discussion around the 2016 election. In this case, a website is created to post content that is verifiably false, often with the intent of changing public opinion, or encouraging supporters of that point of view to share the “news.”
For example, a site published: “FBI Agent Suspected in Hillary Email Leaks Found Dead in Apartment in Murder-Suicide,” which went viral on Facebook. The claims in this article were quickly debunked, but not before it was read and amplified.
The list grows daily and companies that promote news content have a responsibility to do their best to eliminate all types of fake news. It is a disservice to publishers, advertisers, and consumers to look the other way. If you were operating a restaurant, you would never intentionally serve rancid food, saying “the supplier said it was safe.”
Restaurants have a responsibility to test each supplier to ensure food is safe. They have a responsibility to their customers. Technology and media companies need to treat fake news as if it were rancid food – it’s unsafe for everyone, especially those consuming it.
Any type of fake content cannot and will not be tolerated on the Taboola network.
Let me explain why:
I started Taboola 9 years ago with the vision to connect people with information they might like and never knew existed. I believe we have a chance to improve people’s lives by making their world personalized.
It’s not an easy vision or mission, and it is a multi-decade effort for us to achieve. While we are now on our way to serving 12 billion recommendations a day to a billion people a month, we realize the challenges and risks fake news represents.
We started to build and deploy disciplined processes to ensure our mission would not be corrupted by the tricksters, deceivers, and manipulators who inevitably appear to game every new advertising and distribution channel that emerges (very much like we witnessed with Search and Display advertising 20 years ago).
Here is what we did:
- We wrote very detailed, public, advertiser content guidelines outlining what is acceptable, and what is not (including, but not limited to, fake news). But because we know that guidelines can be violated – and we didn’t want to hide behind them – we went further.
- We made a decision that the content distributed on Taboola will be manually reviewed and categorized by a human being who is trained in our policies. Meaning that when advertisers upload content for us to distribute, someone must look at it and approve it first; it’s not an algorithm (not that we don’t love algorithms). We employ nearly 100 account managers whose core mission is to approve or reject content items.
- In addition, all of our employees are trained in our guidelines. Therefore we have hundreds of eyes around the world looking at our content, acting as a second line of defense.
- We have also implemented stringent disclosure guidelines to ensure that before someone clicks on a piece of content on our network, they can easily determine where the content is coming from and whether or not it is sponsored. So we require that the name of the company, and often even the product being marketed, be disclosed together with the link to the content.
- We also launched a network post approval safety process – a belt-and-suspenders approach. This is a team consisting of ten people who are tasked with sweeping our network on a daily basis to make sure nothing inadvertently gets through. The team is advised by and works side by side with Shelly Paioff, our VP of legal, who not only cares deeply about the topic but also regularly participates in industry events to improve our guidelines as the industry evolves.
- We also recognize our limits, and we asked a billion people to help us, launching a program called “Taboola Choice.” We encourage everyone who sees something that they don’t like or that may be deceptive, fake or harmful to tell us. Rest assured, every piece of content flagged is evaluated thoroughly according to our guidelines, and we take action as appropriate.
- We also fight “cloakers” (i.e. advertisers who submit content that looks legitimate at first but is later replaced with fake content in certain geographies where we don’t have employees or it’s not easy for us to spot their toxic game). But we do find out about it eventually and we fix it ASAP, which is also why Taboola Choice is so important.
Let us be the first to admit—we make mistakes (though we try to only make a mistake one time and to correct it quickly).
Fake content is an industry-wide challenge. All of the largest online advertising and distribution companies are wrestling to find the solution to this ever-evolving cat-and-mouse game. Like cyberattacks: you can be relentless, but eventually you will get hacked; the question is how you prepare and how you respond.
Part Two: Freedom of speech, censorship, and uncomfortable content
What about content that doesn’t fall into the “intentionally false” category, but is offensive to the point where it violates certain community standards, or even fits within those standards but some still find difficult to tolerate?
At what point should companies like Facebook, Google, or Taboola prevent a piece of content from being shared, searched, or recommended? This is a tricky topic, as it touches the First Amendment, our fundamental right to freedom of speech and freedom of the press, and the simple truth that what seems appropriate to some seems very wrong to others. This can be a “grey” area, and there is no easy answer.
I’m Jewish, and Israeli (most of my family is still in Israel), and before I started Taboola I spent seven years as an officer in the Israeli Army. As you might expect, and as I’m willing to share, I have my own opinions about matters related to my country, my religion, and the recent US election.
But our position as a company has taught us, and me, that it is important to accept and include all opinions on the web as part of the conversation. We have the responsibility to respect them and allow them to be shared, as long as such opinions are not fake or deceptive and don’t violate the law. Even if those opinions are very different from my own or from anyone else’s at Taboola.
The open web allows many opinions to exist, and we don’t have to like certain opinions more just because they are searchable, shareable or recommendable. But I respect and honor the First Amendment rights because they make us stronger even if—or especially because—we may disagree with content those rights protect.
I don’t think we have all the answers; I know we don’t. This is a big topic for every content platform today, and people we all look up to are writing about it, talking about it, and care because it’s essential to a healthy and open web.
To answer my first question—are we accountable for what we distribute? Absolutely. This post is written to assure all of you that we’re attentive to the debate, and have a strong point of view regarding our responsibilities. We anticipated this challenging fake news epidemic, and instituted our policies and practices long before the public debate erupted. It’s important that both “tech” and “media” companies come together to address these critical issues now.
If you have suggestions as to how we can improve, I’m eager to hear them. Please email us.
Thank you for reading this.
Founder & CEO, Taboola