Measuring Marketing Performance
If you can’t measure attribution, how do you allocate marketing budget?
Marketing is in a transition phase right now. In the past, marketing teams could collect a user's identity and use that information to understand site behaviour. However, stringent consumer privacy legislation is reducing advertisers’ power to target customers and measure attribution.
Through investments in many of the prominent consumer startups in ANZ (Linktree, Eucalyptus, Hnry, Who Gives A Crap), I’ve had a front-row seat to how startups are adjusting strategy. I haven’t seen much written about it externally, so I thought I’d pull together what I’ve learned from nimble, data-driven startups as well as from marketing teams at larger, established consumer brands.
For other marketing-related content, you can read about influencer marketing, community-building, and mobilising your followers.
Attribution is Dead
The 2010s were the heyday of the Direct-to-Consumer brand. Founders could envision a delightful product and use the ad targeting capabilities of Facebook and Google to reach their ideal customer cheaply through performance marketing.
The success of digital ad targeting was dependent on tracking users across the internet using cookies and pixels.
However, in 2021, Apple announced a privacy update called App Tracking Transparency (ATT), giving consumers the right to ask an app not to track them. 80% of consumers exercised that right.
Consequently, ad targeting became more difficult, increasing Customer Acquisition Cost (CAC) for brands. Facebook experienced a significant drop in revenue, losing over $10bn in revenue in 2022 as a result of decreased demand for advertising.
The inability to track users made it increasingly difficult to determine the effectiveness of advertising spend. Brands couldn’t track the user journey from first engagement to purchasing their product, precluding multi-touch attribution efforts.
The difficulty in using third-party data will compound if Google proceeds with its plan to block third-party cookies in Chrome (initially discussed in 2020, with recent reports suggesting a rollout to 1% of users in early 2024).
So how do you measure marketing ROI in this new world?
While there is no silver bullet for measuring ROI (return on investment), after spending hours talking to some of the best marketers in the country¹, I’ve pulled together two high-level, complementary strategies that came up in conversation.
Marketing Mix Modelling (MMM)
Marketing Mix Modelling is a holistic way to understand the ROI of different marketing expenditure, promotions, and pricing initiatives on sales. Rather than relying on individual user data and making bottom-up ROI calculations, MMM takes a top-down approach. It aggregates years of historical pricing, marketing spending, and revenue data, as well as external influences like the economy, weather, and seasonality, and uses econometric models to predict the ROI of each channel on overall business outcomes.
MMM has its roots in the 1960s. In its early application, MMM focused on measuring the impact of traditional marketing channels like TV advertising and promotions for consumer packaged goods brands using linear regression. The technique works by associating spikes and dips in sales with events and actions in marketing. Over time, MMM has evolved to incorporate a wider range of data sources, automated data pipelines, and more sophisticated statistical techniques. The release of Meta’s Robyn open-source library, the publication of Google’s Bayesian MMM papers, and the launch of software tools for automated MMM tracking have modernised the industry in the last few years.
By combining media and external variables, as well as measuring effects like adstock (the duration of an ad’s impact after it’s shown to a customer) and saturation curves (the spending limit before conversion maxes out), MMM provides a good top-down analysis of each channel’s contribution to ROI to help make relative value judgments between channels and inform forecasting and budgeting for different potential scenarios.
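To make adstock and saturation concrete, here is a minimal, illustrative sketch in Python (numpy only). The spend and sales figures, the decay rate, and the half-saturation point are all invented for the example; real MMM tools estimate these parameters from data rather than hard-coding them.

```python
import numpy as np

def geometric_adstock(spend, decay):
    """Each period's ads carry `decay` of their effect into the next period."""
    effect, carry = [], 0.0
    for s in spend:
        carry = s + decay * carry
        effect.append(carry)
    return np.array(effect)

def saturation(effective_spend, half_sat):
    """Hill-style diminishing returns: response flattens as spend grows."""
    return effective_spend / (effective_spend + half_sat)

# Invented weekly TV spend ($k), transformed into "media pressure"
tv_spend = np.array([100.0, 0.0, 0.0, 50.0, 0.0])
pressure = saturation(geometric_adstock(tv_spend, decay=0.5), half_sat=100.0)

# An MMM then regresses sales on the transformed media variables
# (a real model would also include price, seasonality, weather, etc.)
sales = np.array([220.0, 180.0, 160.0, 200.0, 170.0])  # invented
X = np.column_stack([np.ones_like(pressure), pressure])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
baseline, tv_effect = coef  # base sales vs. lift per unit of TV pressure
```

The two transformations are what let the regression see delayed and diminishing effects: week 2 still registers some TV pressure even though spend was zero, and doubling spend less than doubles the response.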
This feature of MMM has made it the default choice for large enterprises dealing with a complex marketing mix and needing a single source of truth for allocating a fixed marketing budget between competing channels. MMM can act as the tie-breaker between all departments, independent from the biases of individual channel managers or operational teams.
Traditionally, MMM has only been accessible to large enterprises spending more than $10m annually on marketing, because each model had to be built bespoke for the customer. Newer platforms are building generalised models that are predictive across customers. If these succeed, per-customer customisation will fall, reducing costs and allowing platforms to lower prices and target smaller customers. As these platforms automate data ingestion and cleaning, models can update more frequently, allowing monthly or even real-time decision-making.
However, MMM is not without its limitations. Like all predictive models, it’s based on historical data, so it cannot predict future outcomes with certainty. Additionally, its effectiveness relies on the quality of data input, so it’s important to know what business outcomes you want to impact with marketing spend and have high-quality data over a long time scale. Gathering this data can be challenging, particularly for organisations that haven't diligently maintained a centralised data warehouse.
For teams who have relied on multi-touch or last-touch attribution in the past, or have not measured marketing effectiveness at all, MMM is table stakes. It’s necessary, but not sufficient.
Lift Tests
While MMM is top-down, using past data to inform future budget allocation, lift tests are bottom-up. They are the default choice for the most sophisticated marketing teams because they’re the only way to prove one thing causes another.
Lift tests consist of two groups: test and control. The test group is exposed to ads, while the control group is withheld from the ads to serve as a baseline.
By measuring the results from each group, you can determine which conversions would not have happened without advertising. This is known as “incremental lift”.
A simple example would be buying billboards in one city and buying none in another, similar city over a defined period while changing nothing else. By comparing sales in the city with the billboards against sales in the city without, you can estimate the incremental ROI driven by the billboards.
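Under the (strong) assumption that the two cities would otherwise have behaved identically, the arithmetic of incremental lift is simple. A sketch with invented figures:

```python
# Invented figures for a hypothetical billboard geo-test
test_city_sales = 12_000     # sales in the city with billboards
control_city_sales = 10_000  # sales in the matched city without billboards
billboard_cost = 1_500

# Conversions that would not have happened without the billboards
incremental_sales = test_city_sales - control_city_sales  # 2000

# Incremental ROI: extra sales generated per dollar of billboard spend
incremental_roi = incremental_sales / billboard_cost
```

In practice the hard part is not this division but the experiment design around it: picking genuinely comparable geographies and running the test long enough to separate the ad effect from noise.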
The control group is important because it helps identify whether other factors, such as competitor behaviour or seasonality, are influencing changes in sales. For some channels, like Above The Line (radio, TV), it can be difficult to ensure the control group doesn’t see the ad, so marketers will often run geo-tests: comparing two cities of similar demographics where one runs the ad and the other doesn’t.
Sophisticated brands run dozens of incremental lift tests, testing a channel across different cities or different creative assets in a particular channel.
In addition to measuring ad effectiveness, some people use lift tests to calibrate their ad platform reporting and their MMM models. Lift tests can de-bias these reports by either confirming or discrediting the results. Say, for example, you’re trying to decide whether you should spend more on Google search. Google’s attribution claims that their ads are responsible for 2,000 of your conversions, but you’re not convinced.
You can run a short test on Google search ads and compare your results to Google’s platform-attributed results. If the results align, you’ve confirmed that Google’s attribution is accurate. If, however, Google search ads drove fewer than 2,000 incremental conversions, Google's platform reporting may be inaccurate. This approach gives marketers more accurate data for decision-making.
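One common way to apply this (again a sketch with invented numbers, not a prescribed method) is to turn the lift-test result into a calibration factor that scales the platform’s future reported conversions:

```python
# Invented figures for calibrating platform-reported attribution
platform_reported = 2000      # conversions Google attributes to search ads
lift_test_incremental = 1200  # conversions the lift test showed were truly incremental

# Calibration factor: what fraction of reported conversions are incremental
calibration = lift_test_incremental / platform_reported  # 0.6

# Apply the factor to future platform reports until the next lift test
next_month_reported = 2500
estimated_incremental = next_month_reported * calibration  # 1500.0
```

The calibration factor drifts as auctions, creative, and seasonality change, which is one reason sophisticated teams re-run lift tests on a regular cadence rather than calibrating once.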
While incremental lift tests are the gold standard, they are complex to set up, requiring at least one dedicated data scientist internally or an external consultant to manage the experiment process.
Pulling it All Together
To get as close to the truth for ROI as possible and budget effectively, marketers will need to combine various methods.
MMM suits large marketing teams that need to understand the effectiveness of spend across channels so they can allocate a fixed budget across teams. It is a baseline, and should be used to inform experimentation with lift tests.
Incremental lift tests are the best way to allocate budget within channels and understand the impact of incremental budget allocation on customer demand and ROI. They are best suited to teams that can make fast decisions and scale up or down marketing budgets as they see opportunities.
While these two methods are some of the best strategies for understanding marketing effectiveness in a post-ATT world, they are by no means the only ones. Brands should also invest in brand tracking to understand awareness among their target audience, customer surveys for specific feedback, and server-side tracking to collect first-party data.
If you’re building tools to help marketers transition to a privacy-first landscape, please get in touch.
Thanks to Tim Doyle and Matt Rossi at Eucalyptus and Sam Redfern at Canva for their patience in educating me on this topic.
Image Credits to AppsFlyer
¹ Plus first-party closed-loop advertising; that measurability is why Retail Media is the fastest-growing digital ad market ever.