On February 16, Google is lifting its prohibition on device fingerprinting for companies that use its ad products.
It’s a surprising reversal. In 2019, Google called the method “opaque” and said it would “aggressively block” it to protect user privacy.
So what changed?
According to Google, it’s “refreshing” its platform policies in light of two shifts in the advertising ecosystem: the rise of connected TV (CTV) and the rise of privacy-enhancing technologies (PETs).
IP addresses are already widely used for ad targeting and measurement on CTV, Google says, and innovations like PETs can help mitigate the related privacy risks.
A PET is any technical method for keeping personal data safe and secure. In the advertising context, that could mean on-device processing to minimize data transfer, or multi-party computation that lets multiple parties in a clean room analyze personal data without exposing it to one another.
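To make the multi-party computation idea concrete, here’s a minimal sketch using additive secret sharing in Python. The party count, modulus and conversion figures are illustrative only, not drawn from Google’s or any vendor’s clean-room implementation.

```python
import secrets

PRIME = 2**61 - 1  # modulus large enough for ad-scale counts (illustrative choice)

def share(value: int, n_parties: int = 3) -> list[int]:
    """Split a sensitive count into n random shares that sum to the value mod PRIME.
    No single share reveals anything about the original number."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares: list[list[int]]) -> int:
    """Each party sums the shares it holds locally; combining those partial sums
    recovers only the total, never any individual contribution."""
    partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]
    return sum(partial_sums) % PRIME

# Hypothetical example: three advertisers each hold a private conversion count.
private_counts = [120, 45, 300]
shared = [share(c) for c in private_counts]
assert aggregate(shared) == sum(private_counts)  # 465, with no count ever revealed in the clear
```

The point of the sketch is that the parties learn the aggregate they care about (total conversions) while each raw count stays on its owner’s side.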
Google has been integrating PETs into its ad products and says it plans to partner with the broader ad industry to “help make PETs more accessible.”
No one disputes that PETs can help protect and secure sensitive data, but are they enough to mitigate the privacy risks of device fingerprinting? We asked the experts:
- Arielle Garcia, chief operating officer, Check My Ads
- Julie Rooney, chief privacy officer, OpenX
- Mateusz Jedrocha, chief product officer, Adlook
- Daniel Rosenzweig, founder & principal attorney, DBR Data Privacy Solutions
- Cillian Kieran, CEO & founder, Ethyca
Arielle Garcia, chief operating officer, Check My Ads
Nope. Google’s invocation of PETs and privacy-preserving technologies construes the privacy risk of fingerprinting as limited to the reidentification of individuals by third parties.
What happened to Google saying fingerprinting “subverts user choice and is wrong,” since people can’t clear their fingerprints? The goalposts move to wherever Google needs them to be.
Julie Rooney, chief privacy officer, OpenX
The ad tech industry’s commitment to developing PETs that meaningfully enhance data privacy and security within the ecosystem is real. Major strides have been made in this space in recent years by many industry players.
That said, this change seems likely to give more personal data to more players or at least to lessen restrictions on its use. Even if that is done more safely via PETs, there are always increased risks when more personal data is changing hands or being used in new ways.
Mateusz Jedrocha, chief product officer, Adlook
PETs undoubtedly help ensure data is handled securely. However, the key issue with using IP addresses as identifiers is user autonomy. Can individuals opt out of tracking or delete the data collected about them?
The broader debate centers on user choice and control. This remains the primary challenge, which is separate from the technical safeguards offered by PETs.
Daniel Rosenzweig, founder & principal attorney, DBR Data Privacy Solutions
Generally speaking, yes, I believe PETs mitigate privacy risks and are a good idea to implement when feasible and warranted, although I also think the devil is in the details.
That said, even when PETs are used, the core responsibility remains unchanged. If an organization processes data in a way that qualifies it as personal data – including data used for device fingerprinting – it should apply the same level of due diligence as for any personal data.
Cillian Kieran, CEO & founder, Ethyca
PETs are like armor on a battlefield. They protect, but they don’t eliminate the fight. And they’re not a silver bullet, particularly when applied to techniques like fingerprinting, which, by design, operate in the shadows of user awareness.
The truth is that fingerprinting is inherently invasive. It tracks people without their knowledge and creates data profiles they never agreed to. PETs can’t solve that fundamental problem because they don’t address consent or transparency.
What PETs can do is make fingerprinting less aggressive by adding a layer of obfuscation and reducing the data’s precision. If PETs are used as a genuine attempt to create a more privacy-conscious framework, they’re valuable. If they’re used to justify the status quo, they won’t fix the underlying trust issues.
The potential is there, but the execution matters.
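For illustration of the “reducing the data’s precision” point Kieran raises, here is a minimal Python sketch of signal coarsening before a fingerprinting signal is used. The signals, bucket sizes and function names are hypothetical and not part of any Google or industry specification.

```python
def coarsen_ip(ip: str, keep_octets: int = 3) -> str:
    """Drop the host portion of an IPv4 address so many devices share one value,
    e.g. 203.0.113.57 -> 203.0.113.0/24 (illustrative; real systems differ)."""
    octets = ip.split(".")
    truncated = octets[:keep_octets] + ["0"] * (4 - keep_octets)
    return ".".join(truncated) + f"/{keep_octets * 8}"

def coarsen_screen(width: int, height: int, bucket: int = 200) -> str:
    """Round screen dimensions into buckets so an exact resolution can't single out a device."""
    return f"{(width // bucket) * bucket}x{(height // bucket) * bucket}"

print(coarsen_ip("203.0.113.57"))   # 203.0.113.0/24
print(coarsen_screen(1512, 982))    # 1400x800
```

Coarsening like this reduces how uniquely a combination of signals identifies one device, but, as the experts note, it does not by itself address consent or transparency.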
Answers have been lightly edited and condensed.