There's a question every advertiser should be asking. Almost no one does, and Google has no incentive to answer it.

If you turned the ad off, would the sale have happened anyway?

The answer matters more than ROAS, more than conversion rate, more than CPA. It's the difference between paying for traffic that genuinely drove revenue and paying for traffic that was on its way to you regardless. Most accounts have some of both. Almost no one knows the ratio.

What ROAS doesn't tell you

A ROAS of 5 looks healthy. Five pounds of revenue for every pound of ad spend. The trouble is that the number doesn't say anything about causation. Google records a click, the customer buys, the spreadsheet treats the sale as caused by the ad. But what if half of those customers were going to type your brand name into Google later that week and buy anyway? What if a third of them were already in your remarketing pool because they'd been on the site three times that month?

The number is still 5. The truth underneath it might be that you paid for £5,000 of revenue and £2,500 of it was already coming. Your real return on the ad spend is half what your reports say.
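To make that arithmetic concrete, here is a minimal sketch of the adjustment. The figures mirror the hypothetical above, and the baseline share, the portion of credited revenue that was coming anyway, is exactly the number an incrementality test exists to estimate; no dashboard will hand it to you.

```python
# Reported vs incremental ROAS, using the hypothetical figures from above.
ad_spend = 1_000            # £ spent on the campaign
credited_revenue = 5_000    # £ revenue the platform attributes to the ads
baseline_share = 0.5        # assumed share of that revenue that was coming anyway;
                            # this is the number only a test can supply

incremental_revenue = credited_revenue * (1 - baseline_share)

reported_roas = credited_revenue / ad_spend        # 5.0, what the dashboard shows
incremental_roas = incremental_revenue / ad_spend  # 2.5, what the spend actually bought

print(f"Reported ROAS:    {reported_roas:.1f}")
print(f"Incremental ROAS: {incremental_roas:.1f}")
```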

The platforms have no reason to make this easy to see. Their incentive is to take credit for as much revenue as possible, because credited revenue is what justifies tomorrow's budget. Their attribution models, even the better ones, are built to assign credit, not to ask whether credit is deserved at all.

Incrementality testing asks the harder question. It strips away the "what would have happened anyway" baseline so you can see what your spend is actually buying.

Brand search, the obvious case

The clearest example most accounts will recognise is brand search.

When a customer searches your brand name and clicks your paid ad, Google records a paid conversion. The campaign reports a high ROAS, often the highest of any campaign in the account. The numbers look great. The campaign survives every budget review.

The question incrementality forces you to ask is: if your paid ad weren't there, would that customer have clicked the organic listing immediately below? In most cases, yes. Brand searches are by definition coming from people who already know who you are. They're trying to navigate to your site. The paid ad is often standing in front of a free organic click.

Some of that paid spend is genuinely defensive. Competitors bid on your brand terms, and conceding the top spot can lose customers who didn't know any better. But "some" is doing a lot of work in that sentence. In our experience across accounts, the genuinely defensive share is usually a small fraction of what the report shows as paid brand spend.

The only way to know is to test.

Auto-bidding makes it worse

The dynamic gets uglier under automated bidding.

Smart Bidding strategies (target ROAS, target CPA, maximise conversions) optimise toward whatever conversion signal you give them. The model learns which clicks tend to convert, and bids more on those. If a chunk of your "converting" clicks are actually customers who were going to convert anyway, the algorithm doubles down. It pays more for them. It treats their conversion rate as a signal of high intent and pushes the bids up across similar audiences.

You end up with a self-reinforcing loop. The algorithm bids harder on the most attributable customers, who are also the customers most likely to buy without an ad. Spend goes up. Reported ROAS stays high. Real incremental return goes down.
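A toy illustration of that divergence, with invented audiences and rates rather than figures from any real account: a credited conversion rate can look best exactly where the incremental one is weakest, and a bidder that only sees credited conversions chases the wrong side of that gap.

```python
# Toy model: credited conversion rates reward the audiences most likely
# to buy anyway. All audiences and numbers are invented for illustration.
audiences = {
    # name: (conversion rate the platform credits, share who'd have bought without the ad)
    "brand / remarketing": (0.10, 0.80),
    "cold prospecting":    (0.02, 0.10),
}

for name, (credited_rate, baseline_share) in audiences.items():
    incremental_rate = credited_rate * (1 - baseline_share)
    print(f"{name:20s}  credited {credited_rate:.1%}  incremental {incremental_rate:.1%}")

# brand / remarketing: credited 10.0%, incremental 2.0%
# cold prospecting:    credited 2.0%,  incremental 1.8%
# A bidder optimising on credited conversions bids roughly five times harder
# on the first audience, even though the real gap in value is marginal.
```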

This is how we've seen accounts spend two or three times more on brand and remarketing audiences than they need to, with the platform reporting healthy numbers all the way through. The platform is doing what it was asked to do. It's optimising for credited conversions. It can't tell you that the credit doesn't reflect causation.

How an incrementality test actually works

The structure is simpler than people expect. The discipline of running it cleanly is the hard part.

You take a slice of your spend (brand search is the most common starting point) and turn it off for a defined period. Two weeks, four weeks, a full month. You leave the rest of the account untouched. You measure what happens to total revenue, not paid revenue. The platform's reported numbers will go down, of course they will; the ad isn't running. The question is what happens at the business level.

If revenue holds, the spend wasn't incremental. The customers were finding you anyway. The paid traffic was buying credit for sales that organic, direct, and email were already going to deliver.

If revenue drops by something close to what the paid campaign was reporting, you've found genuine incremental value, and you should keep spending.

The interesting cases are in the middle. Revenue drops, but by half what the paid reporting claimed. That tells you the campaign is doing something, just less than the platform is willing to admit. You can then keep the campaign running but at a smaller budget, or restructure it to defend only against actual competitor bids rather than blanket-bidding on every brand query.
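One simple way to read the result, sketched below with made-up figures: build a baseline for total weekly revenue from the weeks before the pause, compare it with the weeks during the pause, and express the drop as a share of what the paused campaign had been claiming. The pre-period average used here is the crudest possible baseline; a seasonally adjusted forecast would be better, but the logic is the same.

```python
# Reading an on/off test: how much of the paused campaign's claimed revenue
# actually disappeared? All figures are invented for illustration.

pre_pause_weekly_revenue    = [210_000, 198_000, 205_000, 215_000]  # total revenue before the pause
during_pause_weekly_revenue = [202_000, 196_000, 199_000, 201_000]  # total revenue while paused
campaign_claimed_weekly = 20_000  # revenue the paused campaign had been reporting per week

baseline = sum(pre_pause_weekly_revenue) / len(pre_pause_weekly_revenue)
observed = sum(during_pause_weekly_revenue) / len(during_pause_weekly_revenue)

weekly_drop = baseline - observed
incremental_share = weekly_drop / campaign_claimed_weekly

print(f"Baseline weekly revenue: £{baseline:,.0f}")     # £207,000
print(f"Observed weekly revenue: £{observed:,.0f}")     # £199,500
print(f"Weekly drop:             £{weekly_drop:,.0f}")  # £7,500
print(f"Incremental share of claimed revenue: {incremental_share:.0%}")  # 38%
```

In this invented example the campaign is doing real work, but well under half of what it claimed, which is the middle case described above.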

Geographic holdouts are the cleaner version of the same idea. You run the campaign nationally except for a control region, then compare growth between the test market and the holdout market. The maths is more rigorous and the conclusions are more defensible to a sceptical CFO. The cost is operational complexity. Most accounts don't need geo holdouts to get useful answers. A simple on-off test on a defined slice is enough to expose the obvious overpayments.
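For the geo version, the comparison is between growth rates rather than raw revenue levels. A minimal sketch, with invented regions and figures, of the simplest form of that comparison:

```python
# Geo holdout: compare revenue growth in regions where the campaign kept running
# against a region where it was held out. Regions and figures are invented.

regions = {
    # name: (revenue before, revenue during, campaign running?)
    "north":      (400_000, 428_000, True),
    "midlands":   (350_000, 372_000, True),
    "south-west": (150_000, 157_500, False),  # holdout / control
}

def growth(before, during):
    return during / before - 1

test_growth    = [growth(b, d) for b, d, running in regions.values() if running]
control_growth = [growth(b, d) for b, d, running in regions.values() if not running]

avg_test = sum(test_growth) / len(test_growth)
avg_control = sum(control_growth) / len(control_growth)

# The lift attributable to the campaign is the gap between the two growth rates.
print(f"Test regions grew    {avg_test:.1%}")
print(f"Control region grew  {avg_control:.1%}")
print(f"Incremental lift:    {avg_test - avg_control:.1%}")
```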

The cost: temporary uncertainty

Running these tests has a real cost, and we want to be honest about it before we recommend them to clients.

For the test period, you lose some efficiency. You're either spending in places the algorithm's been told are slightly worse, or you're temporarily not spending at all on what looks like a healthy campaign. The reported numbers get noisier. You won't know until the test concludes whether you've sacrificed real revenue or just paper revenue.

There's also a psychological cost. Watching reported brand ROAS go from 15 to zero, even when you know the test is running, feels uncomfortable. Stakeholders ask questions. The temptation to flip it back on after a week and "check it's still working" is strong, and doing that ruins the test.

The reason we run tests anyway is that the alternative is much worse. The alternative is paying the same overspend every month for years. The cost of a clean test is one bumpy month. The cost of not testing is permanent.

What you actually get back

Three things, in ascending order of value.

Confidence in the spend. The campaigns you keep running after a test are ones you know are doing real work. You can scale them with conviction. You can defend the budget in front of a finance director without flinching.

Better budget allocation. Money freed up from non-incremental campaigns goes somewhere it earns. We've seen tests free up enough budget from defensive brand bidding to fund an entirely new prospecting campaign that the client previously couldn't justify.

A different relationship with the platform's reported numbers. Once you've watched a campaign's reported ROAS go from 15 to "missing" with almost no impact on real revenue, you stop trusting any single platform's attribution at face value. You start asking the right question first: is this incremental, or is the platform taking credit for something that was always coming?

That last shift is the most valuable thing testing buys you, and it carries forward into every future decision you make about ad spend.

The takeaway

Most agencies don't run incrementality tests because they're inconvenient and they sometimes embarrass the agency's own reporting. We run them because the alternative is to carry on charging clients to manage spend whose real value nobody actually knows.

If you're not sure whether the campaigns in your account are buying you incremental revenue or just paying for sales that were always coming, the answer is in the test. It will take a month, it will be uncomfortable, and the result will tell you more about your account than any dashboard ever will.

Start with brand search. That's where the question hides in plain sight, and it's where most accounts will make their first big discovery.