"We have a chance to take it all back Investors were panicked by poor performance due to iOS14, and cut your budget. Now you've been given an opportunity to prove the ROI of ads.
At home – Late one night
You're not working late in the office like you used to, because there's not much to do. Ad budgets were cut when iOS14 broke your tracking. You get an email...
Geo Lift Test
Hey I just got off the phone with the founders: they're open to restarting ad spend! We just need to do it in a way where we can prove ROI to investors - they want to see a Geo Lift test (see article).
Let's get this set up first thing tomorrow.
Talk soon
iOS14 Attribution Woes
When Apple rolled out iOS14, it allowed users to opt out of tracking. That hurt our ability to attribute performance by channel: users who came from ads but opted out of tracking were assigned to 'organic'. Your investors panicked, which made the founders overreact and shut off all spend. For a promising young FinTech app that was already self-conscious about having to pay for growth (why don't we just build a better product?), this uncertainty could kill growth, and if ads don't come back on, you could be looking for another job.
There were already holes in last-click and multi-touch attribution (adblockers, GDPR, incrementality), but this change shook advertisers' confidence in the value of their ads like never before. So even if this startup doesn't sort things out, knowing how to prove the value of advertising will be an essential skill for any analyst going forward: you're highly motivated to find a solution.
As well as hurting the ability to attribute the performance of ads, the changes also forced ad platforms like Facebook to kill off other measurement tools like Lift Tests. These measured the incremental ROI you get from running ads - some of the users you target with ads would have bought anyway, so this was an essential measurement tool.
There is a potential solution that can tell us the value of ads without tracking individual users: GeoLift testing. This method divides the country into regions (geos) to create a test/control split for the experiment, so it doesn't require user-level data. All you need to run a geo-experiment is the ability to run activity only in specific locations. Once you run the experiment, you'll know with confidence whether your spend drove a statistically significant uplift in the areas where you spent (relative to those where you didn't). Eager to find ways to prove the value of ads, Facebook released their GeoLift package as open source, so all of the statistics are handled for you; we just need to run it in a code editor.
Last-click and multi-touch attribution were already flawed, due to issues like GDPR (consumer privacy legislation), adblockers (a third of all traffic) and incrementality (giving credit for sales that would have happened anyway). Since iOS14, no marketer still believes that any one attribution method holds all the answers. However, one method that seems robust to whatever happens with privacy is Geo Lift testing. While Geo Lift tests take slightly more work and more technical skill, they offer marketers proof of ROI, winning back what they lost in attribution.
One accurate measurement is worth a thousand expert opinions
Putting your experiment on the map
This experiment has to be beyond reproach: so how do we set it up the right way to reach statistical significance?
Bank of Yu Office – First thing in the morning
Your boss emailed you about an exciting new development - you're going to get to prove the worth of your ad campaigns with a Geo Lift test. Now you need to learn how to do it right.
GeoLift Testing
With digital attribution off the table, advertisers have been looking for more aggregated statistical methods. No technique is fool-proof, but Geo Testing is robust against almost anything that could happen to advertising, given it only requires the ability to limit ad spend to specific locations.
It works like this: divide your campaign into regions, run a power analysis to determine how many regions need to be 'treated' to reach statistical significance, then run ads only in those treatment regions (or turn off ads in those regions). Using statistical techniques to control for seasonality, you can then estimate the uplift in the regions where you did run ads, relative to the regions where you didn't.
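To make that concrete, here's a minimal sketch of the kind of aggregated data a geo experiment runs on - the locations and numbers are invented for illustration, and the two functions named in the comments are the GeoLift helpers we'll run in later sections:

```r
library(GeoLift)  # Facebook's open-source geo-experimentation package

# A geo test only needs outcomes aggregated by location and day - no user-level data.
# Hypothetical example of the required shape (location, date, Y = conversions):
geo_data <- data.frame(
  location = c("chicago", "chicago", "portland", "portland"),
  date     = c("2021-01-01", "2021-01-02", "2021-01-01", "2021-01-02"),
  Y        = c(120, 135, 80, 95)
)

# The workflow then has two steps:
#   1. Power analysis on pre-test data to pick the treatment geos and test length
#      (GeoLiftMarketSelection).
#   2. Run ads only in those geos, then estimate the uplift versus the untreated
#      geos while controlling for seasonality (GeoLift).
```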
Geo-experiments are particularly suited for online advertising; they have strong statistical rigor and are easy to understand, design, and implement.
Running R-Studio
This course requires you to execute code in R-Studio. If you haven't set that up before on your computer, download R and R-Studio via the links below.
GeoLift Package
The GeoLift package is an open-source library for running geo experiments, released by Facebook's marketing science team. It automates the statistics for you, and provides helper functions and plots for practitioners to use. It's written in a coding language called R, which is primarily used for statistics. R-Studio is an IDE (Integrated Development Environment) that makes it easy to run R code.
For this course we've packaged up code for you to run to get the hang of how GeoLift works - you can run it line by line to execute your test, or modify it for your own purposes. In this chapter we'll run everything in sections 1 and 2, and get ready for the final test.
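If GeoLift isn't installed yet, note that it isn't distributed on CRAN - the sketch below assumes the usual GitHub install via remotes (with the augsynth dependency), which is how the package's own documentation describes it:

```r
# Install GeoLift from GitHub (it is not on CRAN); augsynth is a required dependency.
install.packages("remotes")
remotes::install_github("ebenmichael/augsynth")
remotes::install_github("facebookincubator/GeoLift")

# Load the package before running any of the course script
library(GeoLift)
```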
To run a line of code in R-Studio, just click and drag to select a line, then click Run in the top corner of the script. This will run only the code you selected, and the output will show in the console below.
In R-Studio, copy and paste the code from GitHub into a new script. Run line 5, `R.Version()`, to see what version of R you have.
Pre-Test Data
Before we run our geo experiment, we need to determine how many and which geos to include. To do this we can use some of the helper functions provided by the GeoLift library. First, take a look at the pre-intervention data (link below and in the R code) so you can see what's needed for this type of analysis. Follow along with the code, running each section one at a time, and stop when you get to section 3. Read the comments as you run each section so you understand what's going on. The goal of this exercise is to determine which locations we need to include in our test, and how long to run the test to see a statistically significant result.
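For orientation, here's a hedged sketch of what those pre-test steps look like in GeoLift. The file name, candidate settings, budget and cost-per-incremental-conversion below are placeholders for illustration - follow the values in the course script rather than these:

```r
library(GeoLift)

# Read the raw pre-intervention data (columns: location, date, Y) into GeoLift's format
pre_test_raw <- read.csv("GeoLift_PreTest.csv")   # placeholder file name
GeoTestData_PreTest <- GeoDataRead(
  data = pre_test_raw,
  date_id = "date",
  location_id = "location",
  Y_id = "Y",
  format = "yyyy-mm-dd",
  summary = TRUE
)

# Plot the conversions time series per location to sanity-check the data
GeoPlot(GeoTestData_PreTest, Y_id = "Y", time_id = "time", location_id = "location")

# Power analysis / market selection: how many geos, which ones, and for how long?
MarketSelections <- GeoLiftMarketSelection(
  data = GeoTestData_PreTest,
  treatment_periods = c(10, 15),        # candidate test lengths in days
  N = c(2, 3, 4),                       # candidate numbers of treatment geos
  Y_id = "Y",
  location_id = "location",
  time_id = "time",
  effect_size = seq(0, 0.25, 0.05),     # effect sizes to simulate
  cpic = 7.5,                           # assumed cost per incremental conversion
  budget = 100000,                      # maximum spend to consider
  alpha = 0.1                           # significance level
)
MarketSelections                        # ranked market combinations with power and spend
```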
What's the ideal number of Geos to include in the experiment? (section 2.5)
What locations did we choose?
Explain in your own words why the detectable effect size increases with investment.
Thanks for getting in early on this
As you suggested let's run an experiment in Chicago and Portland.
If the experiment needs $60k to maximize the chances of detecting an effect, I think we can swing that
Thanks for getting this done
This could be the proof that we need
Time to run the experiment - if this gives us the result we need, we win all our budget back. Here goes nothing...
Bank of Yu Office – 15 Days Later
You've been running the experiment, spending on Facebook ads again but only in your two 'treatment' geos. Time to estimate the impact.
Interpreting GeoLift Results
The GeoLift package does a lot of the heavy lifting for you, so all we have to do in this case is run the code and interpret the results. As part of the tutorial we already have the post-experiment data to feed in, which is the same as the previous data but with additional rows for the test period (see file). For this exercise we'll run everything in section 3 (line 133 to the end), though the main thing we need is the GeoLift Inference section, which tells us how many incremental sales we got from our test.
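For reference, the inference step comes down to one GeoLift call on the combined pre-test and test-period data, followed by a summary. The file name and treatment window below are placeholders - match them to the course script and your data:

```r
library(GeoLift)

# Read the full dataset: the pre-test rows plus the new test-period rows
test_raw <- read.csv("GeoLift_Test.csv")          # placeholder file name
GeoTestData_Test <- GeoDataRead(
  data = test_raw,
  date_id = "date",
  location_id = "location",
  Y_id = "Y",
  format = "yyyy-mm-dd",
  summary = TRUE
)

# Estimate the lift in the treated geos over the test window
# (treatment start/end times are placeholders - set them to your test period)
GeoTest <- GeoLift(
  Y_id = "Y",
  data = GeoTestData_Test,
  locations = c("chicago", "portland"),
  treatment_start_time = 91,
  treatment_end_time = 105
)

summary(GeoTest)               # GeoLift Inference: ATT, percent lift, Incremental Y, p-value
plot(GeoTest, type = "Lift")   # observed vs. synthetic counterfactual in the treatment geos
```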
We spent $60,000 on the test, and the GeoLift Inference section has told us Incremental Y (i.e. the number of incremental conversions). What was our incremental CPA?
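Incremental CPA is simply total test spend divided by the incremental conversions GeoLift reports; with made-up numbers purely for illustration:

```r
# Hypothetical figures - substitute the Incremental Y from your own summary() output
spend           <- 60000
incremental_y   <- 5000                     # placeholder, not the real result
incremental_cpa <- spend / incremental_y    # 60000 / 5000 = 12 dollars per incremental conversion
incremental_cpa
```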
Explain what the ATT (Average Treatment effect on the Treated) is in your own terms.
Excellent work with this analysis
I know a 12.8 CPA wasn't exactly what we hoped for, given Facebook used to report it at 7.5
But at least we've proven incrementality and this gets us back in business
We just need to optimize now to get it back to where it was
Thanks again, you might just have saved my job!
"