Every SaaS tool has an AI button. Most of them are noise. Underneath the marketing, there's a handful of workflows where AI is doing real work at our agency, and a larger set of places where we deliberately keep it out. Here's the difference, and why.

Where AI earns its keep

The workflows we trust AI with share a pattern. The inputs are messy, the output needs to be consistent across volume, and a human can spot-check the result quickly. That's where the time saved is real, and the risk of getting things wrong is low enough to absorb.

Product feed enrichment

If you've ever looked at a raw Shopping feed from a store that hasn't had it optimised, you'll know the state it's usually in. Missing colours, missing materials, brand attributes that should be populated but aren't, gender and age group fields left blank. Fixing this by hand across a 2,000-SKU catalogue is a slog. It's one of the first tasks we handed to AI, and it's become one of the clearest wins.

The workflow is: for each product, pass the title and description to the model, ask it to extract or infer the missing attributes, and write them back into the feed. Colour is usually in the title (a "Brass Pocket Compass" is brass). Material is in the description. Gender and age group are often obvious from the product context. What used to be a two-day job for a person is now a ten-minute run, and the consistency is better than a human would produce, because the model doesn't get tired or lose concentration halfway through.
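
To make the shape of it concrete, here's a minimal sketch of that loop in Python. It assumes the OpenAI SDK; the model name, prompt wording, and attribute keys are illustrative, not our production setup.

```python
# Minimal sketch: backfill missing Shopping feed attributes from title and
# description. Assumes the OpenAI Python SDK with OPENAI_API_KEY set; the
# model name, prompt, and attribute keys are illustrative, not our stack.
import json
from openai import OpenAI

client = OpenAI()

ATTRIBUTES = ["color", "material", "gender", "age_group"]

SYSTEM_PROMPT = (
    "You label product feed attributes. Given a product title and description, "
    "return JSON with the keys color, material, gender, age_group. Infer only "
    "what the text supports; use null when you are not sure."
)

def enrich(product: dict) -> dict:
    """Fill in missing attributes; never overwrite values the feed already has."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},  # forces parseable output
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": (
                f"Title: {product['title']}\nDescription: {product['description']}"
            )},
        ],
    )
    inferred = json.loads(resp.choices[0].message.content)
    for attr in ATTRIBUTES:
        if not product.get(attr) and inferred.get(attr):
            product[attr] = inferred[attr]
    return product
```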

We still review the output. But the ratio of human time to finished output is completely different.

Search term classification at scale

Search term reports are one of the most information-dense outputs Google gives you, and one of the hardest to extract value from at scale. A couple of hundred queries is fine to read through. Five thousand queries across multiple campaigns is a different problem.

Our workflow for this is typical of how we use AI across the agency. A Google Ads script pulls the raw data out of the account and into a spreadsheet. AI then classifies each query by intent (transactional, research, brand, navigational, off-topic) and flags the ones that look like candidates for negating. The report rolls up into a view that tells you where the money is going by intent type, and the flagged list gives you a starting point for negative keyword work.
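
For the shape of it, here's a stripped-down sketch of the classification step. It assumes the Google Ads script has already exported the queries to a CSV with a "query" column; the model, batch size, and prompt wording are illustrative.

```python
# Sketch of the classification step. Assumes the queries have already been
# exported by a Google Ads script to a CSV with a "query" column; the model,
# batch size, and prompt wording are illustrative.
import csv
import json
from openai import OpenAI

client = OpenAI()

INTENTS = "transactional, research, brand, navigational, off-topic"

def classify_batch(queries: list[str]) -> list[dict]:
    """Label each search term with an intent and a negation-candidate flag."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                f"Classify each search term's intent as one of: {INTENTS}. "
                "Also set negate_candidate to true for terms that look "
                "irrelevant to the advertiser. Return JSON of the form "
                '{"results": [{"term": ..., "intent": ..., "negate_candidate": ...}]}.'
            )},
            {"role": "user", "content": json.dumps(queries)},
        ],
    )
    return json.loads(resp.choices[0].message.content)["results"]

with open("search_terms.csv", newline="") as f:
    terms = [row["query"] for row in csv.DictReader(f)]

labelled = []
for i in range(0, len(terms), 50):  # small batches keep the output reliable
    labelled.extend(classify_batch(terms[i : i + 50]))

flagged = [r for r in labelled if r["negate_candidate"]]  # a review list, nothing more
```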

But the decision to negate a search term sits with a human. AI doesn't go into the Google Ads account and apply the change. It puts the building blocks in place. We look at the output, consider account context the model can't see, and action the decision ourselves.

This pattern (scripts pull data, AI analyses, human decides and actions) is how we use AI across most of the agency. Reporting, optimisation workflows, insights. The AI speeds up the cumbersome bits that used to eat whole afternoons. We keep the judgement and the execution in human hands.

Research and synthesis

Reading 400 customer reviews to pull out repeat themes, or digesting an hour-long competitor interview for the three ideas worth taking away, is the kind of synthesis task AI does well. Large input, small structured output. A human scans the summary in minutes and acts on it. The judgement stays with us.
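
The shape of that pass, in sketch form (same caveats as above: the SDK, model, and file name are illustrative):

```python
# Sketch of the synthesis pass: large input in, small structured output back.
# The file name, model, and theme count are illustrative.
from openai import OpenAI

client = OpenAI()

with open("reviews.txt") as f:  # e.g. 400 reviews, one per line
    reviews = f.read()

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": (
            "Summarise the repeat themes in these customer reviews. Return at "
            "most five themes, each with a one-line description and a rough "
            "count of how often it comes up."
        )},
        {"role": "user", "content": reviews},
    ],
)
print(resp.choices[0].message.content)  # a person reads this before acting on it
```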

Where we don't let AI run free

The other side of the coin. These are the places we've either stopped using AI or never let it into the final output in the first place.

AI making changes inside ad accounts

This is the line we don't cross. Our AI systems don't have write access to client Google Ads or Meta accounts, and they won't. Every optimisation, pause, bid change, or structural edit is made by a human who has looked at the data, understood the account context, and decided it's the right call.

Some agencies are now running AI agents directly inside client accounts, clicking around, pausing campaigns, adjusting bids, and making structural changes autonomously. A few use general-purpose coding agents for this. Others use bespoke tools built for the job. The pitch is that the AI is optimising 24/7, faster than any human could.

The problem is the same one that shows up everywhere else AI is let loose unsupervised. The model can't see the context that matters. A client about to launch a new product. A supplier change that moves margins. A competitor's price cut three days ago. An AI agent pausing a campaign overnight because short-term ROAS dipped doesn't know any of that, and by the time a bad change surfaces in the metrics, the damage is already done.

Meta's automated-behaviour policies increasingly flag this activity, and Google's are likely to follow the same direction. Whether or not it's explicitly banned, the underlying point holds: this isn't where the leverage is. The leverage is in freeing humans from the cumbersome work so they can spend more time on judgement. Autonomous AI in the account is the opposite of that. It removes the human from the decision entirely.

Strategic decisions with consequences

Which campaigns to scale. Which products to cut from the feed. When to pause brand PPC for an incrementality test. Whether a client's latest revenue dip is seasonal or structural.

We use AI to help summarise the data going into these decisions. We don't use it to make the decisions. The cost of getting one wrong is real money, and the judgement required draws on context that doesn't show up in the numbers the model can see. A good PPC strategist knows which client is preparing to launch a new product line, which supplier is about to change a key margin, which competitor just raised prices. None of that is in the feed.

Public-facing content

Everything we publish to an audience (articles, LinkedIn posts, client emails that matter) is written or finished by a human. AI is useful here as an ideation engine: twenty headline variations in a minute, rough angles to pick from, a first draft to react to. The finished piece is authored by a person. That's the whole rule.

Anything where a wrong answer isn't visible

The common thread across the "don't use" list is a simple test: if the model is wrong, would we notice?

In feed enrichment, a wrong colour shows up as an obvious mismatch on a product page. In search term classification, a misclassified query washes out in aggregate. In insights work, a human is reading and editing before anything goes out.

In strategic decisions, account changes, and audience-facing content, a wrong answer can sit undetected for weeks before it surfaces in poor performance, a confused client, or a brand perception problem. Those are the places where we don't trust a model to fly solo.

The takeaway

The question isn't whether to use AI in marketing in 2026. It's where. The leverage is in workflow and insights: AI handles the data wrangling that used to eat whole days, and humans get more time on strategy and judgement. Using AI to replace the judgement itself is the mistake the "AI agents running loose in the account" model makes. It scales the wrong half of the work.

Scripts pull. AI analyses. Humans decide and execute. That's the line we draw.

If you're not sure where your current agency sits on that line, ask them a simple question this week: which parts of your ad accounts their AI systems have write access to. The answer will tell you more about how your budget is being spent than any report will.