JOE SULLIVAN: Advertisers used to just come to Facebook and say, "I want to target…" and they would identify some characteristics. When most of us think about Internet advertising, we think it's like that still. You show up and you say, "I want to target white guys in California who drive Porsches with this advertisement for cigars," and you get that audience, but most of the time now advertisers already have a list of customers and they want to engage those people specifically, or a subset of them. Our more advanced advertising tools allow them to do that.
With custom audiences, they can bring their customer list to Facebook, but they don't want to leave their customer list with Facebook and they don't want us or their competitors to benefit from it. Over the last couple of years, as these products have evolved, there's been a lot of scrutiny by us and by our partners on how we make this work securely and in a privacy-sensitive way.
How far have most advertisers progressed in maximizing value from their first-party data?
TIM CAMPOS: A lot of companies don't even know what they have. In large part, this data is stored and managed by a technology organization that doesn't necessarily have a business imperative to use it. It's the marketing organizations that have the more intimate use for it, and they don't necessarily know what their companies have. For many companies, the limitation is just a lack of awareness of the opportunities.
Are the more advanced marketers willing to share data with Facebook?
TC: Companies that know what they have don't necessarily want to share it with us because they feel like if they were to take their data and pair it with Facebook, somehow that would lead to a compromise of the data.
Why are they worried?
JS: If you think about the organizations inside a company, you'll have your marketing organization, you'll have your IT organization, you'll have a big store of customer data that you've built up over the years that certain organizations inside that company have access to. Often the marketing team never had access because it was never within their purview.
Now, all of a sudden, these marketing teams are trying to advertise online, and realizing the value in the data that they already have at their company. They've still got to get through the hurdles of their own security team and their own legal team, who say, "Wait a minute, why does marketing need all this data?"
Those companies are still coming to grips with the scope of the data they have, trying to figure out how to manage that data and get access to it. Once they have figured out those things, they have to start working with other advertising platforms.
That's when we get involved, because if they've managed to get through all those issues internally around legal and security and consumer expectations, then you take those issues and double them when you start talking about interacting with third parties.
A lot of companies have been moving very slowly for good reason.
Have you submitted to security audits?
JS: To get advertisers more comfortable with the way we do custom audiences, for example, we worked with a major outside third-party auditor [PricewaterhouseCoopers] to do a white paper. That was incredibly valuable because people just want to see that a third party has validated it.
We did something called a SOC 2, which is a full security audit of our ad environment, and we now turn that over to any advertiser that's interested in talking to us about doing business.
Once a marketer commits to sharing data with you, what are the options for activating it?
JS: We see three different models. One is you'll use a third party as an intermediary. Both sides will submit the data to a Datalogix or [similar data platform] under a contract. A second version would be you tighten up contracts and allow the data to go from one side to the other but with a lot of auditing and oversight and a controlled environment. The third one would be a "dumb terminal" model where we basically just take a device that's not connected to the Internet at all and put two sets of data in it, run it, take the result, and then everybody takes their data back.
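The "dumb terminal" model boils down to a set intersection over hashed identifiers. The sketch below is purely illustrative — the interview doesn't specify the matching algorithm, and the SHA-256 choice, the normalization rules, and the sample lists are all assumptions — but it shows the shape of an offline match where neither party hands over raw addresses:

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    # Both parties must normalize identically (strip whitespace, lowercase)
    # before hashing, or the same address will produce different digests.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical lists: one from the advertiser, one from the platform.
advertiser_list = ["Alice@example.com", "bob@example.com", "carol@example.com"]
platform_list = ["alice@example.com", "dave@example.com", "bob@example.com "]

advertiser_hashes = {normalize_and_hash(e) for e in advertiser_list}
platform_hashes = {normalize_and_hash(e) for e in platform_list}

# The match is a plain set intersection over digests; each side takes its
# own data back afterward, and only the overlap is learned.
matched = advertiser_hashes & platform_hashes
print(len(matched))  # 2 addresses overlap after normalization
```

Because only digests are compared, the machine running the match never needs either side's raw, unmatched customer list in readable form.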
There must be some security risk, however small, associated with the hashing process as it's done today. Speaking as a security expert, what do you think could go wrong?
JS: There are definitely ways that you can do hashing that are reversible and ways that are not. Of course, if the other side is using the same algorithm with the same starting place, they're going to get to the same outcome. If I give you the string of numbers that have been hashed, it's very hard, if not impossible, to get it back to the email address. Unless you've got a massive set of computing power, it's just not going to happen.
Also, remember it is only an email address. It's not an email address and password, or an email address, password and credit card [number].
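Sullivan's point about "the same algorithm with the same starting place" is hash determinism: a cryptographic hash always maps the same normalized input to the same digest, but offers no direct way back. A minimal sketch, again assuming SHA-256 over lowercased, whitespace-stripped addresses (illustrative normalization, not a published spec):

```python
import hashlib

def hash_email(email: str) -> str:
    # Deterministic: identical normalized input -> identical digest.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Two differently formatted copies of one address hash to the same value,
# which is what makes matching between two parties possible.
assert hash_email("user@example.com") == hash_email("  USER@example.com ")

digest = hash_email("user@example.com")
# Recovering the address from this 64-character hex digest means guessing
# candidate inputs and re-hashing; SHA-256 has no practical inversion.
print(len(digest))  # 64
```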
TC: It's kind of like saying: Is it possible to get Ebola from having this conversation because you're a person and you could have Ebola? The reality is, practically speaking, that's not a real risk. The value far outweighs the risk that we're going to die.
Practically speaking, this is a very safe mechanism. And Facebook has no incentive to exploit our advertisers. We are going to be in business with them for a long time. They need to feel that they can trust us. Even an inadvertent mishandling of their data, rather than a malicious misuse, would be very bad for us. We have a lot of incentive to make sure that just isn't going to happen.
But it seems like the balance of trust here is a little asymmetrical. As I understand it, the new Atlas doesn't allow marketers to export their cross-device campaign data to outside platforms. It has to stay within Facebook's walled garden. And you're asking marketers to upload their data sets and trust you. Shouldn't the data flow the other way too?
JS: I gave the Datalogix example before, where we were able to allow Facebook data [outside our walls] and then audit it in a third-party context. We expect other third parties to be just as careful with Facebook data as they expect us to be. We audit their security teams, practices and technology. Sometimes people have unrealistic expectations. "If we leave it on a USB stick in this room, it's definitely going to be safe." No, it's not. If we put it in a safe in this room, and only two of us have the password, then maybe it'll be safe.
We look at the quality of the security team of the other company, whether the company gives the security team deference. Everything matters. If we can come up with a reasonable approach like that, we will do third-party deals. In the context of Atlas, we have made commitments to ensure that the data residing in Atlas doesn't find its way over to Facebook.
One important thing to remember here is that our primary relationship is with the people who use Facebook and who've trusted us with their real identity. We never want to do anything to undermine that trust and their expectations. Everything we do in the advertising context has to be around creating a better experience for them. We're always going to walk into advertising opportunities, rather than run. We're going to try and build controls for people and make sure that they work well. It’s a step-by-step process of introducing people to Facebook as a source of advertising and making sure that they're comfortable with it.
You have plenty on your mind that has nothing to do with advertising, like government surveillance requests. Where does marketing tech fit into your priorities?
JS: It's a real and important part of our priorities. As a security team, we have spent more and more time on advertising over the last couple of years as it's grown in the company. We've always been highly resourced on security, since Mark [Zuckerberg, CEO] and Sheryl [Sandberg, COO] have always considered security a top priority. We now have dedicated people working on ads.
How many people are dedicated to it?
JS: At any given time, we have one or two security engineers who are doing proactive code reviews, just looking at the code that's going out. We run a "bug bounty" program. Most people outside the security world don't really know what that is. We pay researchers to find vulnerabilities in Facebook, poking at our code. We've paid out over three million dollars in the last three years to researchers all over the world in increments of about a thousand dollars apiece.
And do you also hire hackers to attack your ad systems?
JS: We're paying people to hack and we actually end up hiring some of them. This fall we announced that anyone who can find a code vulnerability in our ad systems will get double bounties. The reason is, we think most of the hackers out there are not thinking about ad platforms and environments, but some are. We wanted to get the people who are researching to start thinking about that a little bit more, so we doubled our ad bug bounty payouts.
That means that we get lots of reports – most of them are false positives, but it's just good to have lots of magnifying glasses looking at your code. Then we have a couple of investigators who are always looking at the trends in ad abuse. Is it diet pills this week, or is it stimulus plan packages? Whatever it is that scammers are going to try and get on our site, we have people who are always looking at the general spam and malware stuff.