The next few months will see a fundamental shift in how we measure advertising success. Not because the metrics are changing—revenue is still revenue, after all—but because the way we get there is becoming less transparent and more automated than ever before.

Google's just released their measurement playbook for the AI era, and it's worth paying attention to. Because here's the thing: as Performance Max, automated bidding, and AI-powered campaigns take over more of the decision-making, the old ways of tracking what's working don't quite cut it anymore.

Let me walk you through what's changed, what it means for your business, and why this actually matters more than it sounds.

The Measurement Problem Nobody's Talking About

Here's what's happened over the past couple of years: Google's pushed hard toward automation. Performance Max campaigns, Smart Bidding, Demand Gen—all of it designed to take decisions out of your hands and let the algorithm optimise toward your goals.

And in many cases, it works. You set a target return on ad spend, feed the system some creative assets and product data, and off it goes.

But there's a catch.

When the system makes thousands of micro-decisions per day—which products to show, which audiences to target, which placements to use—you lose visibility into why something's working. You see the overall numbers, but the detail vanishes into the black box.

This is where Google's new measurement guidance comes in. They're essentially saying: if you're going to let AI run your campaigns, you need to completely rethink how you measure success.

What Google's Actually Recommending

The core message in their new guide is this: stop obsessing over last-click attribution and start thinking about incrementality.

Translation? Stop asking "which ad got the final click before purchase?" and start asking "would this person have bought from me anyway, even without seeing my ad?"

It's a smarter question, frankly. Because if someone searches for your brand name and clicks your ad, then buys—was that ad really responsible for the sale? Or were they coming to you regardless?

Google's pushing three big measurement approaches:

First: Conversion Lift Studies. These show you whether your ads are actually creating new sales or just capturing existing demand. You show your ads to one group of people and hold them back from a comparable control group, then compare the results. It's proper test-and-control methodology, and it's the closest thing to proof you'll get that your ads are genuinely working.
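
To make the test-and-control idea concrete, here's a minimal sketch of the arithmetic behind a lift study. The numbers are invented purely for illustration; a real study uses much larger samples and proper significance testing.

```python
# Hypothetical lift-study numbers, purely for illustration.
test_users, test_conversions = 100_000, 1_200        # group that saw the ads
control_users, control_conversions = 100_000, 1_000  # group held back

test_rate = test_conversions / test_users            # 1.2%
control_rate = control_conversions / control_users   # 1.0%

# Conversions the ads actually created, rather than merely claimed credit for.
incremental_conversions = (test_rate - control_rate) * test_users
relative_lift = (test_rate - control_rate) / control_rate

print(f"Incremental conversions: {incremental_conversions:.0f}")  # ~200
print(f"Relative lift: {relative_lift:.1%}")                       # ~20%
```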

Second: Marketing Mix Modeling (MMM). This is the old-school statistical approach that looks at all your marketing activity—paid search, paid social, email, everything—and works out which channels are actually driving growth. It's having a bit of a renaissance because it doesn't rely on cookies or tracking pixels, which are increasingly unreliable.
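
For a rough sense of how MMM works under the hood, here's a deliberately simplified sketch: a linear regression of weekly revenue against spend per channel. Real models need far more data and add things like adstock, saturation curves and seasonality; the channels and figures below are assumptions made up for illustration.

```python
import numpy as np

# Weekly spend per channel (paid search, paid social, email) and weekly revenue.
# Figures are invented purely to illustrate the shape of the model.
spend = np.array([
    [5_000, 3_000, 500],
    [6_000, 2_500, 500],
    [4_500, 4_000, 700],
    [7_000, 3_500, 600],
    [5_500, 3_000, 800],
    [6_500, 2_000, 400],
])
revenue = np.array([42_000, 44_000, 41_000, 50_000, 45_000, 43_000])

# Intercept column captures baseline revenue that arrives without any marketing.
X = np.column_stack([np.ones(len(spend)), spend])
coeffs, *_ = np.linalg.lstsq(X, revenue, rcond=None)

baseline, *channel_effects = coeffs
for name, effect in zip(["paid search", "paid social", "email"], channel_effects):
    print(f"{name}: ~£{effect:.2f} revenue per £1 of spend")
```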

Third: Better conversion tracking setup. Enhanced conversions, server-side tracking, first-party data—all the technical infrastructure that ensures you're capturing as much signal as possible before third-party cookies disappear completely.
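
On the enhanced conversions point, the core mechanic is sending Google hashed first-party data rather than raw personal data. A minimal sketch is below; it assumes simple lowercase-and-trim normalisation, so check Google's current documentation for the exact rules before relying on it.

```python
import hashlib

def normalise_and_hash(email: str) -> str:
    """Normalise an email address and SHA-256 hash it, as enhanced
    conversions expect hashed (not raw) first-party data.

    Assumes lowercase-and-trim normalisation only; verify the exact
    normalisation rules against Google's current spec."""
    normalised = email.strip().lower()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

print(normalise_and_hash("  Jane.Doe@Example.com "))
```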

The thread running through all of this? You need multiple measurement methods because no single one tells the whole story anymore.

Why This Matters More for E-commerce

If you're running an online store, this shift hits you harder than most businesses.

You're likely running Performance Max campaigns that span Shopping, Search, YouTube, Display, Discover, and Gmail all at once. You're feeding in your entire product catalogue and letting Google decide what to show, to whom, and where.

And that's fine—Performance Max can work brilliantly—but you need to know whether it's genuinely driving new customers or just hoovering up people who were going to buy anyway.

Here's a practical example: your Performance Max campaign might be showing strong ROAS. But if you dig into the data (when Google actually lets you see it), you might find a huge chunk of that spend is going to brand Search terms—people who already know your business and are actively looking for you.

That's not necessarily bad. Protecting your brand terms matters. But it's not the same as finding new customers, is it?

This is why incrementality matters. It's the difference between growth and just capturing existing demand more expensively.
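
A quick, made-up bit of arithmetic shows why the blended number can flatter you. If a chunk of the campaign's revenue was coming via brand searches anyway, the return on genuinely new demand looks very different.

```python
# Invented figures, purely to illustrate blended vs non-brand ROAS.
total_spend, total_revenue = 10_000, 60_000   # blended ROAS looks like 6.0
brand_spend, brand_revenue = 3_000, 36_000    # brand terms inside the campaign

blended_roas = total_revenue / total_spend
non_brand_roas = (total_revenue - brand_revenue) / (total_spend - brand_spend)

print(f"Blended ROAS: {blended_roas:.1f}")      # 6.0
print(f"Non-brand ROAS: {non_brand_roas:.1f}")  # ~3.4
```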

The Local Inventory Plot Twist

While we're talking about measurement, there's a quieter announcement worth noting: in-store products can now show up in Demand Gen campaigns.

If you're running physical shops alongside your online store—or selling through retail partners—this is actually quite useful. Demand Gen campaigns run across YouTube, Discover, and Gmail, focused on visual, scroll-stopping creative. Now you can promote products that people can pick up locally, not just order online.

The measurement angle here is interesting. You'll need to connect online ad exposure to offline purchases, which means one of the following:

  • Setting up store visit tracking (if you have physical locations)
  • Using sales data from retail partners (if you sell through other shops)
  • Or accepting that some of your advertising impact won't show up in your Google Ads dashboard at all

That last point is crucial. Not everything that matters can be measured directly in the platform. Sometimes you just see an uplift in overall sales and have to work backward to figure out what caused it.

This is exactly why Google's pushing Marketing Mix Modeling—it's one of the few ways to capture that fuller picture.
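
If you do go down the offline route, the mechanics usually involve capturing the click ID (GCLID) when the lead or sale happens, matching it to the in-store transaction later, and uploading those conversions back into Google Ads. Here's a minimal sketch of preparing such a file; the column headings follow Google's offline conversion import template as I understand it, so treat them as an assumption and check the template you download from your own account before uploading.

```python
import csv

# In-store sales matched back to an online click, e.g. via a loyalty card
# or a receipt that carried the GCLID. Figures are invented.
offline_sales = [
    {"gclid": "EAIaIQobChMI_example1", "time": "2024-05-01 14:32:00", "value": 84.50},
    {"gclid": "EAIaIQobChMI_example2", "time": "2024-05-02 11:05:00", "value": 129.00},
]

# Column names assumed from Google's offline conversion import template;
# verify against the current template before uploading.
with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Google Click ID", "Conversion Name", "Conversion Time",
                     "Conversion Value", "Conversion Currency"])
    for sale in offline_sales:
        writer.writerow([sale["gclid"], "In-store purchase", sale["time"],
                         sale["value"], "GBP"])
```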

The 80/20 Rule Still Applies (Even to AI Campaigns)

There's been some excellent analysis doing the rounds about wastage in Google Ads accounts. The core finding: roughly 20% of your ad spend typically delivers little to no return, and identifying that bottom tier should be your first priority.

Here's what's changed: in the old days of manual campaign management, finding wastage was straightforward. You'd look at keyword performance, cut the losers, and shift budget to winners.

Now, with automated campaigns, it's trickier. You can't just pause individual keywords in Performance Max. You need to look at wastage differently:

  • Which products are burning budget without converting?
  • Which asset groups are underperforming?
  • Which audience signals are you feeding the system that might be leading it astray?

The measurement challenge is that you need granular data to spot this wastage, but AI campaigns specifically hide granularity to "protect the algorithm." It's a fundamental tension.

This is why regular account audits matter more now, not less. You need to pull every scrap of data Google will give you—product-level performance, asset-level reporting, search term insights when they appear—and piece together where the waste is hiding.
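
As a starting point, a product-level export is one of the few places you can still see wastage directly. The sketch below flags products that have spent meaningfully without converting; the file name and columns (item_id, cost, conversions) are placeholders for whatever your actual report contains.

```python
import pandas as pd

# Product-level performance export from Performance Max / Merchant Center.
# File name and column names are placeholders; adjust to match your report.
df = pd.read_csv("pmax_product_performance.csv")

# Products that have spent meaningfully but never converted.
wastage = df[(df["cost"] > 100) & (df["conversions"] == 0)]

print(f"{len(wastage)} products spent over £100 with zero conversions")
print(wastage.sort_values("cost", ascending=False)[["item_id", "cost"]].head(20))
```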

What You Should Actually Do

Right, so what does all this mean in practical terms?

If you're serious about measuring results properly:

Make sure your conversion tracking is rock solid. Enhanced conversions, proper event tracking, first-party data capture—this is foundational. If the data going into Google's system is messy, everything downstream is worthless.

Don't rely on platform reporting alone. Compare what Google Ads says to what your actual sales data shows. Look at new customer acquisition rates, not just overall revenue. Track customer lifetime value, not just first purchase.
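
Even a crude comparison built from your own order data goes a long way here. A sketch, assuming an orders export with customer_id, order_date and revenue columns (the names are placeholders):

```python
import pandas as pd

# Orders export from your store; file and column names are placeholders.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Flag each customer's first-ever order, then split revenue accordingly.
first_order = orders.groupby("customer_id")["order_date"].transform("min")
orders["is_new_customer"] = orders["order_date"] == first_order

revenue_split = orders.groupby("is_new_customer")["revenue"].sum()
avg_revenue_per_customer = orders.groupby("customer_id")["revenue"].sum().mean()

print("Revenue from new vs returning customers:\n", revenue_split)
print(f"Average revenue per customer to date: £{avg_revenue_per_customer:.2f}")
```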

Consider running a conversion lift study if you're spending meaningful budget (typically £5k+ per month). It's the clearest way to know if your ads are genuinely working or just claiming credit for sales that would've happened anyway.

And if you're running Performance Max or other automated campaigns: push Google for every bit of reporting detail they'll give you. Product performance data, placement reports, search term insights—it's all there if you know where to look and ask the right questions.

The Bigger Picture

Here's the truth of it: measurement has always been messy in marketing. We've just got used to the illusion of precision that digital advertising promised.

Now, as AI takes over more decisions and tracking becomes harder, we're heading back toward a world where you need multiple imperfect signals to triangulate the truth.

That's not necessarily worse. It's just different. And it requires a more sophisticated approach than "check the dashboard and see if ROAS is green."

The businesses that'll win in this new environment are the ones that invest in proper measurement infrastructure now—before third-party cookies disappear completely and before AI campaigns become even more opaque.

Because the robots might be taking over the optimisation, but you still need to know if they're actually working.


If Performance Max campaigns are making it difficult to see where your spend is actually going, I can help — take a look at my Performance Max optimisation service or get in touch.