Changing your campaign without testing first is like rewiring your house with the power on. Split-testing (or Google Ads experiments) is how you avoid frying your results – and your budget.
Campaigns Learn Over Time. Experiments Let You Make Changes Without Resetting the Clock.
Google Ads campaigns learn over time. They get better at spending your money, if you let them. Change something – even a small tweak to targeting or bidding – and you can knock the campaign back into its learning phase.
That means it starts learning from scratch, trying to regain lost ground while you’re left wondering why conversions dropped off a cliff.
Google Ads experiments create a clone of your original campaign. You can tweak it however you like – new bidding model, different copy, alternative audience – without losing the performance your main campaign has built. If it tanks, no harm done. If it performs, you roll it out.
Good Things Come to Those Who Wait
Split-tests take time. And the early results don’t always tell the full story. Campaigns typically need a couple of weeks just to adjust and collect enough data to be meaningful. Pull the pin too early, and you’ll miss the bigger picture.
- The minimum is 30 days: Much of this window is the learning phase, so expect volatility.
- “Best practice” is 45 days: This gives you enough data outside of the learning phase to really measure performance.
- Listen to Google: The platform will tell you what your specific account needs. Ignore it at your own expense.
Only Make One Change at a Time – Or You’ll Learn Nothing
This isn’t a slot machine. It’s the scientific method. You start with a hypothesis, e.g. “phrase match might convert better than broad”, then you test only that. Change five things, and if performance shifts, you won’t know why.
You Don’t Have To Gamble the Whole Budget
Any time you run a split test, you take a risk. Your experiment could improve results – or make everything worse.
If your current campaign is working well, you might not want to risk tanking 50% of its performance by giving an equal share to your campaign concoction. In that case, keep 75% of the budget on the safe bet, and run your test on the remaining 25%. If it flops, you’ve only spent a quarter of your budget proving what doesn’t work.
When to Stick With 25% – and When to Go 50/50
Let’s say you have a $1,000 budget, which usually gets you 1,000 clicks and 10 conversions – a cost-per-acquisition (CPA) of $100.
You want to improve that, so you run a test. But you don’t want to blow up your existing performance in the process.
To stay safe, you keep 90% ($900) on the original campaign and allocate 10% ($100) to your test. If the test fails, you’ve only risked $100 – not $500, which would be the case with a 50/50 split.
The results come in:
- Your original campaign delivers 9 conversions from $900. CPA protected at $100.
- Your test delivers 2 conversions from $100. CPA drops to $50. Looks like a win, right?
Not so fast.
At the old $100 CPA, that $100 of test budget would have been expected to deliver one conversion anyway, so the test really only drove one extra conversion. At that scale, the difference is statistically meaningless. You might be seeing a fluke, not a trend.
The smaller the test budget, the smaller the sample size. And the smaller the sample size, the harder it is to draw conclusions you can trust – let alone apply across your full campaign.
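If you want to pressure-test that intuition with numbers, here’s a minimal sketch in Python – our own back-of-envelope illustration, not anything Google Ads reports for you. It puts a rough 95% confidence interval (the standard Wilson score interval) around each arm’s conversion rate, assuming roughly $1 per click as in the example above:

```python
# Rough 95% confidence intervals for each arm's conversion rate.
# Numbers follow the worked example above, assuming ~$1 per click,
# so $900 buys ~900 clicks and $100 buys ~100 clicks.
from math import sqrt

def wilson_interval(conversions, clicks, z=1.96):
    """95% Wilson score confidence interval for a conversion rate."""
    p = conversions / clicks
    denom = 1 + z**2 / clicks
    centre = (p + z**2 / (2 * clicks)) / denom
    margin = z * sqrt(p * (1 - p) / clicks + z**2 / (4 * clicks**2)) / denom
    return centre - margin, centre + margin

control = wilson_interval(9, 900)   # original campaign: 9 conversions from ~900 clicks
test = wilson_interval(2, 100)      # experiment: 2 conversions from ~100 clicks

print(f"Control rate ~1.0%, plausible range {control[0]:.1%} to {control[1]:.1%}")
print(f"Test rate ~2.0%, plausible range {test[0]:.1%} to {test[1]:.1%}")
# Control: roughly 0.5% to 1.9%. Test: roughly 0.6% to 7.0%.
```

The two ranges overlap heavily, which is a formal way of saying the halved CPA could just as easily be luck as a genuine improvement.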
If you want confidence in your test results, sometimes you do need to run a true 50/50 split. Just make sure you’re willing to wear the downside if it doesn’t go to plan.
If things did go to plan, you can of course run a new experiment at an 80/20 split, and if that works, 70/30. Mind you, that takes time as well: Google recommends running split tests for about 45 days, so three rounds of near-identical experiments is more than four months of testing. If you have that time to spend, feel free.
What You Should Be Testing (and What’s a Waste of Time)
Worth testing:
- Bidding strategy
- Ad schedule
- Copy types
- Location targeting
- Match types
- Goal settings
- Audiences
Not worth testing:
- Minor tweaks when your campaign’s already performing well
- Changes you can’t define a clear goal for
- “Just want to give it a refresh”
If it’s not tied to a business objective or backed by a clear hypothesis, it’s probably just tinkering.
Data Can Be Misleading If You Don’t Know What You’re Looking At
Keep in mind that when analysing data, you might only have 80% of the picture. That last 20% could explain why something that looks like a loser is actually supporting your highest-converting campaign.
Don’t make decisions on incomplete data. And don’t overanalyse. Set your goal, run the test, look at the outcome – and make sure your conversion tracking is properly configured so the numbers actually mean something. Enhanced conversions are definitely recommended.
Real Example: Demand Gen vs Demand Capture
One of our best-performing tests flipped the brief on its head. Instead of targeting people searching “tender writer,” we went after people searching for tenders – earlier in the decision journey. The result? Same spend, better leads. That one insight shifted the whole campaign strategy.
Same story with a plumbing campaign. Testing “plumber in Melbourne” vs “how to fix a leaky tap.” The second one brought in people actually looking for help – not just comparing quotes. That’s demand gen.
💡 Want to give it a try yourself?
Why not run an experiment on your messaging? Don’t target what you do – target the problem you solve.
Google Ads provides instructions on how to create a Demand Gen campaign.
Your Agency Should Be Testing – And Telling You About It
A decent agency runs tests. A great one tells you what they’re testing, why, and what to expect. Even if the test tanks, you should hear about it.
If your agency doesn’t do that? Here’s how we approach PPC at Stark Digital.
No guesswork. No gimmicks. Just digital marketing, done well.