With everything else going on in 2026, at least we don’t have Google’s “Privacy Sandbox” to worry about. The high-profile project is over. Or is it?
Not quite. Now, instead of one browser introducing its own ad features while the others sit back, browser vendors are collaborating to move ads under browser control piece by piece.
A proposed standard now under discussion at W3C, dubbed simply “Attribution” (formerly Privacy-Preserving Attribution), aims to redefine how ad effectiveness is measured across the web. This proposal would centralize measurement of ad effectiveness under the control of major platform players, including Google, Apple and Meta, which is involved despite not operating its own browser.
While “Attribution” is framed as a privacy win, in practice it risks moving the industry backward. The proposed system adds obfuscation and creates more opportunities for claiming attribution when it isn’t earned.
More attribution shenanigans mean more incentives for platforms and fraud rings to front-run conversions by collecting data in more intrusive ways, ultimately increasing privacy risks.
It also runs headfirst into the ad industry’s growing understanding of the “halo effect,” where ads perform better in trusted environments.
When platforms control attribution credit, what would limit their tendency to shift more advertising onto low-trust content and AI-generated slop?
Where the model starts to fall apart
The W3C proposal may be unworkable for plenty of practical reasons. For instance, a feature called “privacy budget” would mean that an error in generating one report could make a future, corrected report about the same data permanently impossible.
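The irreversibility problem can be illustrated with a toy privacy-budget ledger. This is a hypothetical sketch of the general technique, not the proposal’s actual mechanism or API: once a report spends the budget allocated to a given data set, even a corrected re-run of that same report is refused.

```python
# Toy sketch of a privacy-budget ledger. The class, method names and
# epsilon values are illustrative assumptions, not the W3C "Attribution"
# proposal's actual interface.

class PrivacyBudget:
    def __init__(self, total_epsilon: float):
        self.remaining = total_epsilon
        self.spent_on: set[str] = set()  # data sets already reported on

    def request_report(self, dataset_id: str, cost: float) -> bool:
        """Return True if a report may be generated, spending budget if so."""
        if dataset_id in self.spent_on:
            # Budget for this data set was already consumed. Even if the
            # first report was generated with an error, a corrected report
            # on the same data is permanently denied.
            return False
        if cost > self.remaining:
            return False
        self.remaining -= cost
        self.spent_on.add(dataset_id)
        return True

budget = PrivacyBudget(total_epsilon=1.0)
print(budget.request_report("campaign-42-week-1", cost=0.3))  # True: first report allowed
print(budget.request_report("campaign-42-week-1", cost=0.3))  # False: correction refused
```

The point of the sketch is the one-way door: the ledger cannot distinguish a legitimate correction from a privacy attack that replays queries, so it must refuse both.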
But even if practical issues are solved, the biggest potential hurdle is the eventual clash with advertising’s halo effect.
Research consistently shows that ads perform better in premium environments. Comscore, Moat and Newsworks have all documented measurable lifts in attention and business outcomes, with Newsworks reporting a 22% uplift for ads on news sites.
Meanwhile, The Trade Desk, which, as a buyer, has little incentive to inflate publisher value, found that premium placements drive 40% higher purchase intent.
There is also a documented reverse halo effect, where ads placed next to harmful or extremist content actively degrade brand perception.
The measurement problem no platform wants to acknowledge
For advertisers, measuring the halo effect is critical for media decisions, and the Coalition for Innovative Media Measurement recently proposed a collaborative set of standards for measuring media quality.
If the “Attribution” system were accurate, it would surface this halo effect. But the platforms shaping this system are economically invested in the opposite outcome.
Whether it’s viral low-quality content like Shrimp Jesus circulating on social platforms, the disinformation economy enabled by open web programmatic advertising or the flood of algorithmically amplified AI-generated videos on YouTube and Instagram, the platforms do best by choosing volume and scale over trust and quality.
Imagine Google, Meta or Apple decision-makers seeing the halo effect come out in the “Attribution” results and deciding that the decades they spent trying to get the best-paying ads into the cheapest possible context were wasted.
“Attribution” will be quietly tweaked, just as the famous “Privacy Sandbox” proposals were. We’ve seen this pattern before. Earlier Privacy Sandbox proposals promised sweeping changes, like moving ad auctions into the browser itself (via initiatives like TURTLEDOVE). In practice, those ideas were scaled back into more limited implementations like FLEDGE, which became the Protected Audience API.
Similarly, the original FLoC proposal fell short of its promise and was replaced by the Topics API, which critics argue advantaged large platforms like YouTube over independent publishers.
The “Attribution” system, if adopted into browsers, will be corrected until the inconvenient halo effect disappears. It will have to come up with the “right” answers one way or another.
The individual developers working on the proposal at W3C have good intentions, but will have some tough choices to make, as their employers’ expected results become clear.
But there is still time to fix the situation. Nobody has deployed “Attribution” or pronounced it the canonical source of ad effectiveness measurement yet.
The solution can start with W3C being honest about what it’s not good at. W3C’s limited focus on antitrust has made it a convenient venue for Google’s “Privacy Sandbox” and now for the attribution cartel.
Even if members accept that antitrust just isn’t W3C’s bag, the “Attribution” proposal is a bad fit for the values W3C cares about, too. The sustainability impacts of processing vast quantities of noise data should raise concerns, just as proof-of-work blockchain systems did.
The reporting centralization that “Attribution” requires should likewise raise societal impact questions. And IT decision-makers worldwide, members or not, have concerns about overreliance on US-based vendors.
Attribution reporting is already being addressed at IAB Tech Lab – a forum that not only has a modern antitrust policy but includes other stakeholders besides just Big Tech-controlled browsers under pressure to produce the “right” answer.
Continuing to host the attribution cartel, with the inevitable regulator attention that it will draw, will only distract W3C from the tasks it is good at and delay the progress of legitimate attribution tracking.
What the industry needs now are measurement solutions built in the interests of advertisers, publishers and users – not another system designed to consolidate platform power.
“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.
Follow Don Marti and AdExchanger on LinkedIn.
