Andrew Lolk at Savvy Revenue wrote a sharp piece recently on how advertisers quietly sabotage Smart Bidding with well-intentioned constraints. We agree with the points he raised.
But there's a layer above what he covered. The mistakes that survive past the beginner phase. The ones we find in accounts where the advertiser already knows not to fiddle with tROAS every Monday, already knows budget caps strangle the algo, already feels comfortable handing over control.
These are the mistakes that hide. The account looks healthy. ROAS is on target. Conversions are flowing. But the algo is making decisions on broken signal, and you're leaving 10 to 30% of profit on the table without realizing it.
Here are six of them.
1. Feeding the algorithm polluted conversion data
Smart Bidding optimizes toward whatever you tell it counts as a conversion. The problem is, most accounts have conversion definitions that haven't been audited in years.
What you'll find when you actually look:
- View-through conversions counted alongside click-throughs, inflating signal.
- Newsletter signups, PDF downloads, and engagement actions (often auto-suggested during Google's gtag setup) all counted as primary conversions alongside purchases.
- GA4 conversions imported into Google Ads with a different attribution model than the platform's native conversions, double-counting if running alongside the native Google Ads Tag.
- Cross-domain tracking gaps that leak attribution to direct traffic.
- Server-side conversion API duplicating client-side conversions when consent state changes mid-session.
Each of these is a small distortion. Stacked together, the algo is optimizing toward a phantom version of your business.
The fix takes an afternoon. Open the Conversions section in Google Ads. For each conversion action, check what's actually firing, what the count source is, and whether it's set to Primary or Secondary. Move anything that isn't a real purchase or qualified lead to Secondary, where it gets reported but isn't used for bidding.
Then run a comparison covering at least 14 days, lagged to match your account's actual attribution timeline. Some accounts close most conversions within 3 days. Others need a much longer lag before the comparison is meaningful. Compare Primary conversions in Google Ads versus actual orders in the back-end system over the lagged window. If the numbers don't match within 5%, you've got a signal problem.
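The lagged comparison above can be sketched in a few lines. The 14-day window, 7-day lag, and 5% tolerance come from the text; everything else (function name, date-only granularity, the fixed `today`) is an illustrative assumption:

```python
# Sketch of a lagged conversion reconciliation. Counts both streams over a
# window that ended `lag_days` ago, so late-attributed conversions have had
# time to land, then compares against a tolerance.
from datetime import date, timedelta

def reconcile(ads_conversion_dates, backend_order_dates,
              lag_days=7, window_days=14, today=date(2024, 6, 30),
              tolerance=0.05):
    end = today - timedelta(days=lag_days)
    start = end - timedelta(days=window_days)

    def in_window(d):
        return start <= d < end

    ads = sum(1 for d in ads_conversion_dates if in_window(d))
    orders = sum(1 for d in backend_order_dates if in_window(d))
    gap = abs(ads - orders) / max(orders, 1)
    return ads, orders, gap, gap <= tolerance
```

Run it against date columns exported from each system. A gap above 5% is the signal problem worth tracing before touching any targets.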
2. Optimizing for revenue when margin is what matters
tROAS on top-line revenue is the default, and for accounts with a roughly uniform product margin, it works fine. For everyone else, it quietly destroys profit.
The dynamic: the algo learns to push your highest-revenue products. If those products happen to have thinner margins than others in your range (common when product mix spans multiple price points or supplier tiers), the algo quietly tilts spend toward lower-profit SKUs while reporting a target-meeting ROAS. High revenue doesn't always mean low margin. But until you check, you don't know what mix the algo is actually optimizing into.
The fix is profit-on-ad-spend (POAS) thinking. Two ways to operationalise it:
- Send margin-adjusted conversion values to Google Ads instead of revenue. If your gross margin on a £200 product is 30%, send £60 as the value, not £200. The algo now bids based on what you actually keep.
- If sending net values is too invasive, use Value Rules. These let you adjust conversion values up or down based on conditions you define, for example "reduce value by 40% when the product category equals X" or "increase value by 20% for new customers." Set them up under Tools > Conversions > Value rules. Less precise than sending real margin data per transaction, but there's no engineering work and you can have it running in an hour.
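Both options above reduce to the same arithmetic. In this sketch, the SKU margins, the 40% default, and the category rules are all illustrative assumptions, and Google's actual Value Rules run inside the platform, not in your code:

```python
# Sketch: what conversion value to send based on gross margin, plus a
# Value-Rule-style fallback. All margin figures and rules here are assumed.

MARGINS = {"SKU-A": 0.30, "SKU-B": 0.55}   # gross margin by SKU (assumed)
CATEGORY_RULES = {"clearance": -0.40,       # Value-Rule-style adjustments
                  "new_customer": +0.20}

def margin_adjusted_value(revenue, sku):
    """Preferred path: send what you keep, not what you charge."""
    return round(revenue * MARGINS.get(sku, 0.40), 2)  # 0.40 = assumed default

def rule_adjusted_value(revenue, tags):
    """Fallback path: approximate Value Rules when per-SKU margin isn't wired up."""
    value = revenue
    for tag in tags:
        value *= 1 + CATEGORY_RULES.get(tag, 0.0)
    return round(value, 2)
```

The £200 product at 30% margin from the text comes out as `margin_adjusted_value(200, "SKU-A")`, i.e. £60 sent instead of £200.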
For lead-gen accounts, the same principle applies in a different form. Sending conversion value based on lead score, instead of treating all leads as equal value, lets the algo bid more aggressively for the audience segments that actually close.
We've yet to audit an ecom account where this change didn't surface at least one product category that was eating margin while showing acceptable surface metrics.
3. Smart Bidding optimizing on incomplete conversion data
The previous mistake was about polluted signal: too much getting counted. This one is the opposite. Too little getting counted. Most accounts have meaningful conversion under-counting and don't know it.
Where conversions go missing:
- Consent gaps. Users who decline tracking get dropped entirely from the conversion stream. Even with Consent Mode v2, modelled conversions only partially recover the loss.
- Cross-device journeys. Click on mobile, convert on desktop logged out. Without enhanced conversions or stable user identity, the connection breaks.
- Server-side gaps. Client-side tags fail more than people think. Adblockers, browser tracking protection, page-load failures.
- Attribution windows shorter than the journey length (covered in #5 below).
- Conversions happening offline. Phone calls, in-store visits, B2B sales-team-closed deals.
The cleaner the picture you can build between Google Ads conversion data and your true back-end revenue, the better. But every obvious tool for that comparison is itself incomplete.
- Your back-end system (Shopify, CRM, ERP) doesn't natively tag orders as "Google Ads source" with any precision.
- GA4 has its own gaps. Sampling on bigger accounts, default channel groupings that lump categories together, attribution drift across the reporting interface.
- Manual UTM tagging breaks in a hundred small ways.
This is the structural reason serious advertisers eventually build third-party attribution infrastructure. A first-party capture of the click ID, persisted server-side, reconciled against true back-end orders, gives a complete view that no amount of Smart Bidding tuning can produce on its own.
The fix isn't a single setting. It's a series of layered improvements:
- Enable Enhanced Conversions (free, takes an afternoon)
- Run Consent Mode v2 with modelled conversions
- Implement server-side tagging if you haven't already
- Audit your conversion windows against actual journey length (see #5)
- For accounts where the data still doesn't reconcile, dedicated attribution infrastructure is the next step
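The first-party click-ID capture described above can be sketched minimally. The in-memory store and function names are assumptions; a real pipeline persists to a durable server-side store and joins on order import:

```python
# Sketch: capture the Google click ID (gclid) first-party on landing and
# persist it so back-end orders can later be joined to ad clicks.
from urllib.parse import urlparse, parse_qs

CLICK_STORE = {}  # session_id -> gclid; stand-in for a real database

def capture_click(session_id, landing_url):
    """On landing, pull gclid from the URL and persist it server-side."""
    params = parse_qs(urlparse(landing_url).query)
    gclid = params.get("gclid", [None])[0]
    if gclid:
        CLICK_STORE[session_id] = gclid
    return gclid

def attribute_order(session_id, order_id):
    """At order time, join the order to the stored click ID."""
    return {"order_id": order_id, "gclid": CLICK_STORE.get(session_id)}
```

With that join in place, "true back-end orders versus Google Ads conversions" becomes a query rather than a guess.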
4. Brand contaminating nonbrand bidding
Even with clean conversion data, running brand and nonbrand campaigns through the same portfolio bid strategy creates a different problem.
Brand search typically converts at 15 to 30% with a ROAS of 20x or higher. Nonbrand converts at 2 to 5% with ROAS of 3 to 6x. These are economically incompatible. Asking a single Smart Bidding strategy to optimize both is asking the algo to find a target that makes sense for neither.
The portfolio's blended target sits above what nonbrand can realistically hit and below what brand could deliver. Brand under-bids, giving up impression share it could win cheaply. Nonbrand over-bids at the edges, chasing conversions at uneconomic CPCs.
The fix is straightforward but routinely ignored: brand and nonbrand campaigns should never share a bid strategy. Build separate portfolios or campaign-level strategies, with targets calibrated to each side's economics. ROAS 20x for brand. ROAS 5x for nonbrand (or whatever your real numbers say). Let the algo do its job on each separately.
If you've got a portfolio strategy with both, look at impression share and CPC variance by campaign. If brand is leaving Top of Page impression share on the table while nonbrand is paying CPCs that don't make sense for the conversion rate, that's the symptom.
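The blended-target problem is simple spend-weighted arithmetic. The 20x and 4x figures below are the illustrative ranges from the text, and the spend split is an assumption:

```python
# Sketch: the spend-weighted target a shared portfolio actually lands on.

def blended_roas(brand_spend, brand_roas, nonbrand_spend, nonbrand_roas):
    """Spend-weighted ROAS across both campaign types."""
    revenue = brand_spend * brand_roas + nonbrand_spend * nonbrand_roas
    return revenue / (brand_spend + nonbrand_spend)

# £2k of brand at 20x pooled with £10k of nonbrand at 4x:
blended = blended_roas(2_000, 20, 10_000, 4)  # ~6.7x
```

A ~6.7x blended target is unreachable for nonbrand economics (3 to 6x) and far below what brand delivers, which is exactly the under-bid/over-bid split described above.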
5. Conversion lag misalignment
Two timeframes often get used interchangeably, but they aren't the same:
- Google's conversion window is the lookback period Google uses to attribute a conversion to a click. A setting in your account, typically 7 to 90 days.
- Your actual user journey length is the average time from first touch to conversion across your buyers. You read this off GA4 or whatever attribution data you have. Some accounts run at 3 days; others can be closer to a month.
If your conversion window setting is shorter than your actual user journey length, Google never sees the connection between the early-funnel click and the eventual conversion. The click falls outside the lookback. Smart Bidding gets no conversion signal for that click and bids down on whatever drove it, even though it really did contribute to the eventual purchase.
This is endemic to lead-gen and considered-purchase categories. The algo sees a click today, sees no conversion within the window it's using for its decision, and concludes the click wasn't valuable. It bids down. Two weeks later the conversion lands, but by then the algo has already pulled spend from the keyword or audience that drove it.
Two fixes, depending on the platform:
- Set Conversion Windows in the Conversions UI to match your actual conversion lag. If conversions take 30 days, set a 30-day window. Smart Bidding will use the longer window and adjust for delayed reporting.
- Use Google's conversion adjustments via offline conversion imports. Upload offline conversions with the original click timestamp so the algo sees the connection retroactively. Especially valuable for B2B where the qualified-lead-to-sale conversion is offline.
Most accounts we audit have the default 30-day window enabled but don't actually have data flowing in beyond day 7 to 10 because the offline upload pipeline isn't built. The setting looks right. The actual signal is short.
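Measuring your actual lag is straightforward once you have reconciled click/order pairs. The 90% coverage cutoff and the sample lags below are assumptions for illustration:

```python
# Sketch: read actual conversion lag off (click_date, order_date) pairs,
# then size the window to cover most journeys.
from datetime import date, timedelta

def lag_days(pairs):
    """Sorted days-from-click-to-conversion for each (click, order) pair."""
    return sorted((order - click).days for click, order in pairs)

def window_covering(pairs, coverage=0.90):
    """Smallest conversion window (in days) covering `coverage` of journeys."""
    lags = lag_days(pairs)
    idx = max(0, int(len(lags) * coverage) - 1)
    return lags[idx]

# Ten journeys with lags from 1 to 28 days:
sample = [(date(2024, 1, 1), date(2024, 1, 1) + timedelta(days=d))
          for d in [1, 2, 3, 4, 5, 8, 10, 14, 21, 28]]
```

On this sample a 7-day window misses the tail, while 21 days captures nine journeys in ten. A 30-day setting is only meaningful if data actually flows in that late.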
6. Geo and device overlays still applied "just in case"
We see this constantly. Account uses Smart Bidding, ROAS-based, fine. Account also has -30% bid adjustment on mobile, -20% on tablet, +15% on London, -10% on Birmingham, all set up years ago and never reviewed.
These overlays defeat the entire point of Smart Bidding. The algorithm already knows that mobile users in Birmingham at 9pm on a Tuesday convert 22% better than the account average. It's making millions of micro-bid decisions per day based on signals you couldn't manually account for. Layering a flat -30% mobile adjustment on top of that means you're telling the algo "ignore your signal, use my outdated rule."
The fix is the simplest in this list. Audit your campaigns for active bid adjustments on Smart Bidding strategies. Zero them out, with two exceptions:
- Geographic targeting (not adjustment): if you genuinely don't want to advertise somewhere, exclude the location entirely rather than adjusting bids
- Brand campaigns: where you may want to floor bids regardless of algo signal for defensive reasons
Everything else should be set to 0%. Let the algo work the territory it knows.
Bonus mistake: trusting Google's "recommended targets" in the UI
Worth mentioning briefly. Google's in-platform recommendations for tROAS and tCPA are calibrated to maximize volume, not your profitability. The "set a target ROAS of 4x to capture more conversions" suggestion isn't wrong, exactly. It just optimizes for outcomes that align with Google's incentives (more spend, more conversions, more reported volume) rather than yours.
Set your targets based on your own data: customer lifetime value, profit margin, payback period. Not the helpful blue tooltip.
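Deriving your own floor from margin takes two lines. The 30% margin and the choice to keep half of gross profit are illustrative assumptions:

```python
# Sketch: set tROAS from your own economics instead of the UI suggestion.

def breakeven_roas(gross_margin):
    """ROAS at which ad spend exactly cancels gross profit: 1 / margin."""
    return 1 / gross_margin

def target_roas(gross_margin, profit_share=0.5):
    """Target that leaves `profit_share` of gross profit after ad spend."""
    return breakeven_roas(gross_margin) / (1 - profit_share)
```

At a 30% margin, break-even is ~3.3x, so a "recommended" 4x target barely clears cost. Keeping half the gross profit implies a ~6.7x target instead.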
The common thread
The Savvy Revenue piece argues that advertisers sabotage Smart Bidding by over-constraining it. We agree. What we'd add: most advertisers who've moved past that stage are now sabotaging it by feeding it broken signal.
Smart Bidding is a function that maps your inputs (conversion data, value, attribution, signal) to your outputs (bids, placements, audiences). If your inputs are wrong, your outputs are wrong. No target adjustment fixes a polluted conversion stream, a brand-nonbrand bid strategy, or a 30-day conversion lag being optimized on a 7-day window.
The leverage isn't in tweaking the algorithm. It's in cleaning up what you're telling it.
TL;DR
- Audit your conversion definitions. Move soft conversions to Secondary.
- Optimize for margin or lead quality, not revenue. Send adjusted conversion values.
- Your conversion data is more incomplete than you think. Layer enhanced conversions, Consent Mode v2, server-side tagging. For accounts that still don't reconcile, build dedicated attribution infrastructure.
- Brand and nonbrand should never share a bid strategy.
- Match your Google conversion window to your actual user journey length. Upload offline conversions to bridge the gap if needed.
- Strip geographic and device bid adjustments from Smart Bidding campaigns.
- Ignore Google's recommended targets. Set your own based on your economics.
