Accelerated Marketing Experiments

Do you like baseball, movies, Brad Pitt, and metrics?

I realise this is a random question but stay with me …

If you’ve seen the movie Moneyball, you’ll be familiar with one of the greatest sporting stories of all time. And while Brad Pitt does a great job in the main role, the real stars of the show were the data and metrics that form the foundations of the entire adventure.

The Hollywood classic is based on the true story of Billy Beane who used the Moneyball Theory to manage the Oakland Athletics through a 20-game winning streak. The team clinched the American League West, despite losing three of its highest value players at the start of the season.

Operating on a minimal budget, Beane teams up with an analytics expert to recruit a team of misfit players based solely on two key performance metrics: slugging percentage and on-base percentage. This idea was geared towards guaranteeing runs at the end of a season – and as we know, runs equal wins, and wins equal titles. Beane’s recruitment process was based purely on a statistical hypothesis, and became one of the greatest sporting experiments in history.

What we learnt from this real-life event is that by understanding the key performance metrics that dictate success you can push to be the very best, even with a limited budget.

This article explores the benefits of running accelerated marketing experiments to test hypotheses and support key decision making. It will help you operate at the forefront of your industry and strive for greatness.

Batter up!

Objectives Of This Guide

This article will help you understand:

  • the hidden potential of metric-based experiments to identify whether your current marketing campaigns are performing as well as they could be
  • the undisputable value of data when making informed decisions that can drastically improve your campaigns or identify problems that need immediate attention
  • metrics that matter when launching a new experiment
  • how to set up these metrics to track the relevant information you need.

By the end of this article you will learn:

  • the type of experiments that can identify ways of improving the efficiency of your campaigns
  • a formal process to set up the framework for marketing experiments to help you test and prove or disprove your hypotheses
  • how to be brave and bold with getting the most out of your experiment
  • when to run an experiment, how often, and what you should be expecting to find from your results.

What’s The Problem?

A lot of companies don’t run marketing experiments, for a variety of reasons.

  • They don’t see them as a priority.
  • They don’t have the resources or in-house experience to run them properly.
  • They base their ongoing campaign effectiveness purely on company-wide enquiries and sales data.

In some cases, agencies or internal marketing teams are telling organisations that their campaigns are already performing at their maximum capacity. But without running an experiment to confirm this, there’s no way to double-check whether they are missing out on potential benefits or opportunities.

There is a risk with all these cases that the company is flying blind, making random guesses without the help of real-world data. Put simply, data helps to refine your decision-making process and identify areas of opportunity within the marketplace.

If you don’t have data …

  • you can’t create marketing forecasts to predict or adequately prepare for the future, leaving your business open to greater risk
  • management will make decisions that drive the direction of the business without accurate data, which can be extremely damaging to performance in a modern, competitive marketplace
  • the fear of the unknown “What if it doesn’t work?” could stand in your way and affect potential growth.

Even if you do have data …

  • your implementation time may be too slow, and experiments need to be fast to be worthwhile
  • it can be unclear and misleading due to using the wrong match type settings from online resources such as Google Keyword Planner
  • it may not always apply to your specific circumstance, as metrics and conversion data can differ across different companies and industries.

It is a capital mistake to theorize before one has data. — Sherlock Holmes

Why Does This Matter?

Accelerated marketing experiments are crucial. They have the potential to underpin predictable, reliable, and massive business growth. Experiments also have the ability to test and confirm the validity of a hypothesis.

Since a hypothesis is just a theory until proven, the definitive nature of data is key.

In the highly flexible world of digital marketing, key metrics act as markers for regular campaign optimisation. Performing regular experiments can ensure you remain agile and nimble, while reinforcing the efficiency of your ad spend.

Marketing experiments can and do result in successful outcomes. They give you confidence in your investment, backed by the knowledge of the market testing.

Even if your experiment fails, or delivers less than ideal results, it will give you peace of mind, context, and perspective in a commercial sense. This is still a win as all of these are essential for clear decision making.

 You can have data without information, but you cannot have information without data. — Daniel Keys Moran

What’s The Solution?

Accelerated marketing experiments can help verify and test hypotheses in the real world. They give you definitive clarity around what is and isn’t working.

Setting up a marketing experiment involves:

  • having a clear understanding of your business and marketing objectives
  • deciding on the type of experiment you will run
  • ensuring you have an adequate sample size to test
  • having the right web analytics configured to measure, track, and analyse

A theory can be proven by experiment; but no path leads from experiment to the birth of a theory. – Albert Einstein

When setting up your experiment, stick to a consistent and simplified process. This allows you to go through each step before you reach your final objective.

The steps below show the process we use when implementing experiments for our clients.

Step 1: Create a hypothesis and establish the objectives of your experiment

When testing, comparing or troubleshooting, start with an achievable hypothesis that stems from your marketer’s intuition. To support decision making or campaign optimisation, your hypothesis must be provable or disprovable via quantitative data or metrics.

It needs to be both reasonable and measurable within your resource and time constraints. Consider:

  • budgets
  • time frames
  • staff resources.

Also consider other parameters. For example, to determine the most profitable outcomes, you may wish to run the experiment only on the busiest weeks of sales activity.

To accurately measure and analyse the impact that your experiment has, test just one element or hypothesis at a time. Multiple experiments may muddy your data.

Step 2: Gather resources, research, information and data to refine your hypothesis

Once you have established your hypothesis, you’ll need to pull together all available resources to ensure you are ready to launch. This can include:

  • human resources
  • software
  • data sets
  • any other information needed to help prove or disprove your hunch.

Look into analytics and undertake extensive research to understand exactly what you are going to test, and how. Your initial hypothesis may change at this stage based on the information you uncover.

Step 3: Outline your campaign metrics and key performance indicators for tracking

To reach an outcome, experiments rely heavily on metric-based findings, such as improved conversion rates that result in a change to campaign messaging. It is essential to set up tracking to measure and record the key metrics and key performance indicators (KPIs) you will need to form your conclusions.

If you’re running an ecommerce campaign and want to track sales or revenue data, for example, you can do this through setting up goal tracking on your website. Platforms like Google Analytics and Google Tag Manager allow you to measure what sources, campaigns, pages, and experiments translate to revenue.

Establishing your ideal “win rate” is a good start. This will help you identify the kind of variation you’re looking for from the experiment findings. For example, is a 5% improvement in conversion rate a success, or do you want sharper increases of 20% or more?

Focus on the data that is aligned to your business and marketing objectives, like sales activity, as opposed to vanity metrics, like Facebook likes. The latter provide little value to decision makers when analysing results.

Rather than focusing on total sales, you may choose to focus on cost per acquisition (CPA) to reduce the amount spent per lead. This reduction will ultimately help you become more efficient with your online advertising budget. Your business will be able to generate more sales and revenue from less ad spend.
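The CPA arithmetic behind this is simple enough to sanity-check by hand. As a quick sketch (the spend and conversion figures below are purely illustrative, not from a real campaign):

```python
# Illustrative cost-per-acquisition (CPA) arithmetic -- figures are made up.
def cpa(ad_spend: float, conversions: int) -> float:
    """Cost per acquisition: total spend divided by conversions won."""
    if conversions == 0:
        raise ValueError("No conversions yet -- CPA is undefined.")
    return ad_spend / conversions

# The same budget generating more leads means a lower (better) CPA:
before = cpa(ad_spend=2000.0, conversions=20)  # $100 per lead
after = cpa(ad_spend=2000.0, conversions=25)   # $80 per lead
print(f"CPA before: ${before:.2f}, after: ${after:.2f}")
```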

Step 4: Design all elements of your marketing experiment and roll it live!

This is where you jump into the trenches to set up your experiment. Determine where you are going to run these tests, and what you are going to test. Some examples include:

  • Use Google Ads Experiments to randomly send traffic to two different conversion landing pages on your website, with a 50:50 split, and see which has a higher conversion rate.
  • Use Facebook Ads to create two different ads with the same campaign and ad set information, testing whether feature-focused or benefit-focused text performs better in driving leads on the same landing page.
  • Use a Google Optimize A/B Test to change the length of a form on a specific web page to test whether a quick enquiry or detailed form results in more leads.
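Platforms like Google Ads Experiments and Google Optimize handle the 50:50 traffic split for you. If you ever need to run a split on your own site, one common approach is deterministic hash-based bucketing, so each visitor always sees the same variant. A minimal sketch (the visitor IDs and experiment name are hypothetical):

```python
# Sketch of deterministic 50:50 bucketing for a self-hosted A/B test.
# Visitor IDs and the experiment name are illustrative assumptions.
import hashlib

def assign_variant(visitor_id: str, experiment: str = "cta-text") -> str:
    """Hash visitor + experiment name so each visitor always sees the
    same variant, and each experiment splits traffic independently."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same visitor always lands in the same bucket:
assert assign_variant("visitor-123") == assign_variant("visitor-123")
```

Hashing on both the experiment name and the visitor ID means two concurrent experiments won't funnel the same people into the same halves.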

The secret to setting up the right experiment is having a clear understanding of what you want to achieve, and then leveraging the best platforms to assist in your collation and analysis of key data.

If you don’t know where to begin, check out our list of Types Of Accelerated Marketing Experiments further down this article, supported by a list of Experiment Examples we’ve carried out for our clients. We’ve also provided you with a list of Software To Get You Going to equip you with the tools to easily set up an experiment with metrics that will help you prove or disprove your hypothesis.

If you have set up tracking on your website through a platform like Google Analytics, it’s important to make a note of the day you start your experiment. This allows you to refer back to this date when analysing the final data. Annotations can be a useful visual guide when measuring the impact of these changes on your website and business performance.

Be sure to test your experiment internally before going live. This will help uncover any teething issues and ensure you are ready to pull the trigger once the time is right. Once everything is set up, tested – and tested another 10 times just to be sure – it’s time to take your marks … get set … and go!

Step 5: Analyse and interpret your metrics based on campaign objectives

Rather than a “set and forget” approach, it’s best to regularly check on your experiment. This will give you some early insights into activity and allow you to identify any potential problems.

Depending on your objectives and the experiment itself, data from the first 24 hours can paint a very different picture compared with your final metrics. Keep a close eye on things at the very start of your experiment to identify any early learnings that may affect the final conclusions.

A word of warning – despite the first 24 hours being a crucial window in which to analyse initial findings, it’s also important not to change things too regularly. This may skew your data and not give it enough of a chance to run its course. Rather than making multiple changes, try adjusting one thing at a time, making an annotation, and then taking a step back to analyse and measure the results. Give the data time to form peaks, troughs, and emerging trends from which you can draw and formulate statistically significant conclusions.

Continue to cross check your metrics with your experiment objectives to make sure you are accurately tracking all relevant data ahead of analysis.

For example, take the hypothesis of split testing the words ‘submit’ versus ‘enquire’ on website form buttons. We want to ensure we are tracking enquiry form submissions on both forms, within the same time period, from the same sources. This way we can accurately measure which word resonates best with customers.

It will also be worth noting the conversion rate of website landing page visits, calculated as the number of conversions divided by the number of visits to the page. This ensures your data set accounts for the number of page views when determining the most popular call-to-action text per form.
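Conversion rate matters because raw conversion counts are misleading when two variants receive different amounts of traffic. A minimal sketch (the visit and conversion figures are illustrative):

```python
# Conversion rate = conversions / page visits. Figures are illustrative.
def conversion_rate(conversions: int, visits: int) -> float:
    """Fraction of page visits that resulted in a conversion."""
    if visits == 0:
        return 0.0
    return conversions / visits

# Two hypothetical form variants with different traffic volumes --
# the variant with fewer total conversions can still convert better:
submit_rate = conversion_rate(conversions=18, visits=600)   # 3.0%
enquire_rate = conversion_rate(conversions=21, visits=500)  # 4.2%
print(f"submit: {submit_rate:.1%}  enquire: {enquire_rate:.1%}")
```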

Step 6: Draw your conclusions and prove/disprove your original hypothesis

Ok so this is the fun part!

Once you have all your experiment data in front of you, it will quickly become apparent whether or not your initial hypothesis is accurate.

Make sure you fully understand your metrics so you can draw the right conclusions from the data, cross checking the final outcomes against the initial objective. It is essential to have enough statistically significant data on which to base your final conclusions, which underlines the importance of setting the right campaign metrics and KPIs in Step 3.

If the results surprise you by disproving your hypothesis, don’t take it personally. Instead, view the experiment as a success. You now have the data, perspective, and context to improve or refine your ongoing marketing activity.

Let’s look at some metrics from our earlier experiment where we tested the words “submit” and “enquire” on two different contact forms on two separate website pages to see which button text generated the most conversions.

To differentiate the two pages, one was a ‘Quote’ page showcasing details of the service without any pricing. The website form was a lead magnet for people to submit a request for a quote. The second form tested was from the website’s ‘Contact’ page, with company contact information above the form, asking for customers to leave their details for any enquiries. There were two variations of each form, with the only element changed being that call-to-action button at the end.

This experiment was done over a two-week period of high sales volume. We measured the main metric, the number of conversions per form, using Google Analytics.

Once the experiment had ended, we drew some clear conclusions from what the data had told us.

  • The most popular call-to-action word to use on both forms in general was “enquire”, which generated three more enquiries over the fortnight.
  • Despite the word “enquire” being more popular overall, users on the ‘Quote’ page preferred having the word “submit” as a call to action.
  • Given the nature of the ‘Contact’ page, the word “enquire” was by far more popular in eliciting a response, generating almost triple the conversions when compared with “submit”.

You would normally want statistically significant data to confirm what you need to do to improve your advertising or website performance. However, speed is also important when conducting experiments, as you need to be able to move quickly and dynamically within a constantly changing marketplace.
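One common way to check whether a difference in conversion rates is statistically significant is a two-proportion z-test. The sketch below uses only the Python standard library; the figures are illustrative, and for very small samples an exact test would be more appropriate:

```python
# Sketch of a two-proportion z-test for comparing conversion rates.
# Figures are illustrative; use an exact test for very small samples.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for H0: both variants share one conversion rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical result: 18/600 vs 30/600 conversions.
p = two_proportion_z(conv_a=18, n_a=600, conv_b=30, n_b=600)
print(f"p-value: {p:.3f}")  # a p-value below 0.05 suggests a real difference
```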

Rather than taking a long time to set up an experiment and gather the results, conducting a lean experiment allows you to formulate conclusive data more regularly. This means you can constantly optimise your campaigns to better achieve both short- and long-term marketing goals. Learn more about the power and importance of moving fast to implement marketing strategies and experiments in our Quick Reaction Marketing Force article.

As mentioned above, an acceptable “win rate” will give you a benchmark to work towards. If your experiment shows a 5% increase in conversions, is this a significant enough variance to consider the experiment a “success”? Or are you looking for more notable changes before adjusting your campaign strategy or website features?

You want enough data that will give you confidence in your future decisions, but that will also be received within a short amount of time.
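This trade-off between confidence and speed can be estimated up front: the smaller the lift you want to detect, the more visitors each variant needs. The sketch below uses the standard normal-approximation sample-size formula; the conversion rates are illustrative:

```python
# Rough per-variant sample size for detecting a conversion-rate lift,
# via the standard normal-approximation formula. Rates are illustrative.
from math import ceil, sqrt
from statistics import NormalDist

def sample_size(base_rate: float, lift_rate: float,
                alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed per variant to detect base_rate -> lift_rate."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # significance level
    z_beta = NormalDist().inv_cdf(power)            # statistical power
    p_bar = (base_rate + lift_rate) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(base_rate * (1 - base_rate)
                                 + lift_rate * (1 - lift_rate))) ** 2
    return ceil(numerator / (base_rate - lift_rate) ** 2)

# Detecting a small lift (3% -> 4%) takes far more traffic than a
# large one (3% -> 6%), which is why lean experiments chase big swings:
print(sample_size(0.03, 0.04), sample_size(0.03, 0.06))
```

This is why chasing a 20%+ swing, as suggested earlier, lets an experiment conclude much faster than hunting for a marginal 5% improvement.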

Step 7: Adjust and refine your activity based on the experiment findings

From your conclusions in Step 6, use the data to change and adapt your current campaign, with a view to addressing the experiment’s initial objectives.

Make as many changes as needed, significant or minor, knowing that experiments can often uncover more findings than initially planned. View this as a critical stage in adjusting your current activity to get the best results. Remember to only test one change at a time and make sure your data is statistically significant.

In the case of our marketing experiment, we wanted to identify the most appropriate button text out of two options for optimal form conversions. The findings, however, uncovered some additional data which we can use to adjust the website.

  • Change the button text on the ‘Quote’ page form to show the word “submit”. This is what customers prefer, most likely due to the type of page, so the call to action is more definitive and makes more sense.
  • Update the button text on the ‘Contact’ page form to show the word “enquire” as customers at this section of the website prefer a softer call to action.

Step 8: Repeat the process for optimal campaign results

You don’t need to run experiments only when something is broken or underperforming. You can and should use them regularly to constantly adapt and optimise your campaigns, helping you stretch your marketing resources further.

Refer to your findings from Step 6 to uncover any anomalies in the data that you may need to test further. Focus on these and repeat the process.

The best marketers are the ones constantly looking to test, refine, and repeat. Customers, markets, and external factors are always in a state of flux, and your marketing should follow the same trajectory.

In our example, the next step could be to adjust the wording on both website pages to be specific to the language used in the call to action. With slight tweaks to headings, wording and phrasing, conversion rates on both pages could improve, based on your findings from the initial experiment.

So now you’re a pro, you can head back to Step 1 and begin your next experiment.

Types Of Accelerated Marketing Experiments

While we’ve hinted at some forms of marketing experiments, the examples below give you an idea of the diversity of tests you can implement.

A/B split tests – an even test split between two variables targeting the same audience:

  • Compare male and female chatbot faces to see which best resonates with enquirers.
  • Alter the complexity of your website contact form – e.g. less detailed vs complex.
  • Evaluate different call to action prompts on buttons – e.g. Submit, Enquire, Contact.

Multivariate tests – similar to A/B testing but with multiple variations of parameters:

  • Test different headlines in ad copy whilst using the same imagery and ad creative.
  • Explore different value propositions, such as discounts, offers, and lead magnets.
  • Introduce ad schedules at specific times of the day or week to test engagement levels.

Redirect tests – split testing of two different web pages with the same audience:

  • Testing bounce rate between two different ‘Contact’ page designs.
  • Measuring conversions between two varying white paper landing page designs.
  • Analysing website exit rates between two different ‘Thank You’ page designs.

Email marketing tests – various tests to enhance engagement with your database:

  • Trial different headings, buzz words, and tones in the subject line of email campaigns.
  • Try using different salutations in email introductions – e.g. Hello, Hi, Hey, Howdy, Yo!
  • Explore the difference in language from casual tones to professional expression.

Website experiments – testing to improve website user experience and user engagement:

  • Restructure your pricing page to either show or hide your prices.
  • Introduce pop-up forms on your website to trial new prompts for lead generation.
  • Use different landing page lengths to determine how much detail visitors want.

Design tests – experiments using different design aesthetics to boost engagement:

  • Compare the use of feminine versus masculine design styles in different ad creative.
  • Change ad creative while using the same text.
  • Adjust your campaign or brand colours on your landing page to test engagement.

Google experiments – varying tests to optimise conversions through paid Google Ads:

  • Add a schedule to your Google Ads campaigns to test against continuous marketing.
  • Experiment between refined and broad targeting when launching new campaigns.
  • Target males in one campaign and females in another to identify lead quantity and quality.

Social media experiments – varied tests to ascertain efficiency in paid and organic content:

  • Analyse different bidding strategies in Facebook to ascertain your optimal settings.
  • Adjust the amount of text in your social media ads to determine optimal design styles.
  • Run the same Facebook Ads to different audiences to identify ideal demographics.

Content experiments – tests to improve engagement, conversions and brand positioning:

  • Diversify your blog titles to test different engagement levels with your articles.
  • Trial different white paper titles to see which ones generate the most downloads.
  • Write adverts using different points of view – e.g. First, Second, and Third person.

Other campaign experiments – general experiments to improve campaign efficiency:

  • Launch the same campaign in different months to refine your marketing calendar.
  • Run webinars in the morning, afternoon or evening to determine optimal event times.
  • Run promos with multiple prizes compared to one “grand prize” for maximum entries.

As you can see – if you can measure it, you can test it.

 Torture the data, and it will confess to anything. – Ronald Coase

Real-Life Experiments

Example 1: Split Testing Landing Pages - 80% increase in conversions

The client, an established provider of elective surgery, was running an ongoing Google Ads campaign to promote its services. When searchers clicked on the ads, they were taken to a standard landing page with information about the clinic and a contact form to register an enquiry. We wanted to run an experiment to test if there were ways of improving the conversion rate of visitors to the page.

Setting Our Hypothesis
Based on a similar test for a different client, we initially thought a shorter landing page with less text and clutter would increase the chances of prospective patients submitting their contact details via the form.

Outlining Our Objective
Our objective was to test two landing pages, a succinct option with just bare essential information, and one with a lot more detail about the clinic. Our main campaign metric was focused on conversions and conversion rate per landing page.

Setting Up The Experiment
We designed both landing pages and ran an ad variation test experiment through Google Ads to split traffic evenly between both pages. Thanks to a feature within Google Ads, we were able to send an even split of browsers to each page from the same campaign, to keep all other factors consistent.

Outcome & Conclusions
The result was a higher conversion rate and more conversions on the longer landing page. This was likely due to the highly personal nature of an elective surgical procedure, with prospects responding better to more informative, detailed pages. We then adjusted our Google Ads campaign to send more browsers to the longer landing page, while keeping some traffic going to the shorter landing page to continually run the experiment.

This resulted in an 80% increase in conversions within a month.

Side note: We have run the same experiment with clients across multiple industries, such as pet care and the finance and accounting sector, where visitors preferred the shorter, more succinct version of the landing page. This shows how experiments can have drastically different outcomes depending on factors such as your industry, or the type of product or service you’re offering.

Example 2: Investigating Form Conversion Rates - Conversion rates up 35%

We worked with a prominent financial services provider to optimise a free credit score check function on its website, set up as a lead magnet campaign to get customer data. While the free offer was expected to take off, the initial launch resulted in a low conversion rate of just 1%. It was clear something wasn’t adding up (pun intended).

Setting Our Hypothesis
We initially looked at Google Ads and Google Analytics, neither of which highlighted any issues. We then considered that either the offer itself wasn’t strong enough, or that there was a potential problem with the form or sign-up process.

Outlining Our Objective
The objective was to improve the conversion rate of the landing page by investigating the sign-up process. This would identify any issues that may be preventing customers from filling out or completing the form. Given the high potential search volume for these services, we were confident we could improve conversions while also working towards a more efficient CPA.

Setting Up The Experiment
Since Google Ads and Google Analytics didn’t uncover the issue, we used Hotjar to record website browsing sessions to identify potential adjustments for a more user-friendly form. We then began recording videos of how each individual user navigated the page.

Outcome & Conclusions
Once we had several recordings and began watching them back, we found that the predictive address function in the landing page form struggled to match entries with the address database. This translated to form errors. As there wasn’t a valid error message alert set up, customers were blissfully unaware that their information was not being received. When we addressed this issue, we immediately saw the conversion rate jump from 1% to 15%.

Additional investigation showed that the terms and conditions were too aggressive. By simply adjusting the Ts & Cs to be a little more subtle and friendly, we further improved the conversion rate up to 35%.

Side Note: When we first analysed the problem with Google Analytics and Google Ads, we could see that customers were arriving on the site but not converting. By introducing Hotjar, we were able to add another layer of analysis to our experiment to determine the exact movements of browsers. This helped us pinpoint the specific issues, and was faster than re-developing the offer or landing page entirely, demonstrating the importance of aligning your experiment with the right software or tools.

Example 3: Optimising Webinar Registrations - Leads increased 216%

Our team collaborated closely with a boutique accounting firm to help launch a series of webinars to engage with potential customers and to get contact information for email nurture campaigns. The webinars were structured around helping clients make more informed business decisions. Our goal was to help drive registrations at a desirable CPA.

Setting Our Hypothesis
We believed the event topic would be attractive to a wide demographic of potential clients, which we could target through specific experiments using Facebook targeting. Since the experiment was running during a time of social distancing from COVID-19, Facebook gave us a strong marketing channel for registrations due to the increasing amount of time users were spending online, particularly on social media.

Outlining Our Objective
We wanted to constantly implement and test new segmented audiences to find which demographics yielded the most registrations at the best CPA. From there we hoped to use customer profiling to further refine our ad spend efficiency while also identifying the best markets to go after for future events.

Setting Up The Experiment
Our team began with conducting research into what these demographics looked like. We started by deploying a small number of audiences in our campaign targeting, and earmarking the high performers. We also ran experiments around male versus female targeting at a campaign level, as well as specific industries matched to the ideal customer profile for the client.

Outcome & Conclusions
By starting broad with our targeting, we were able to isolate who resonated the most with our ads. This meant we could fine-tune our messaging and targeting, drastically increasing the number of conversions. After two days of the campaign we had 12 registrations.

Once we completed a series of campaign optimisations based on the above learnings, this number quickly jumped to 38 just two days later. We achieved this 216% increase with a modest CPA, helping to refine and establish our target demographics for future webinar presentations.

Side Note: The above experiment represented just a part of the campaign and focused on Facebook specifically. We also ran additional tests to measure the conversion rate of event registrations on different conversion pages, testing an Eventbrite event page next to a landing page on the website. From the experiment we found that the Eventbrite page generated double the conversions of the website landing page – a good indicator that clients want an easy sign-up process that involves fewer clicks and less time invested.

Example 4: Testing Ad Engagement On Emerging Platforms

To remain at the forefront of the digital marketing industry, we decided to research an array of new and emerging social media channels to discover alternatives to popular mediums such as Facebook, Instagram, LinkedIn, and Twitter. We wanted to see which platforms proved to be the most effective in generating strong user engagement, measurable and quality conversions, and more efficiency with ad spend.

Setting Our Hypothesis
We felt the “big four” social media channels were saturated with advertising. There were other platforms to explore to help us engage with new markets and deliver notable results with limited resources.

Outlining Our Objective
Our objective was identifying the most relevant emerging or alternative social media channels that would help us achieve measurable results within a short amount of time. We wanted to ultimately improve campaign metrics such as CPA or return on investment (ROI).

Setting Up The Experiment
After researching an extensive number of emerging and alternative social media platforms, we decided to run a series of experiments on Quora, Reddit, Pinterest, and Snapchat for clients in a range of industries. We set a limited budget and a short window period of just one week. To test the usability of these platforms, we measured factors such as set up times, ease of use, platform interactivity, and optimisation capacity. Our main focus was analysing key campaign metrics such as conversions, user engagement and ad spend efficiency.

Outcome & Conclusions
We were able to achieve a CPA of $100 for a campaign in Quora compared with a $148 CPA in Google for the same campaign, with Reddit and Snapchat delivering strong results within the week and Snapchat achieving a cost per swipe of just $0.40.

The experiment uncovered a lot more than this. Alongside an ability to reach new customer markets, we equipped ourselves with the full suite of emerging and alternative platforms available to marketers, helping us, and our clients, remain on the ball and at the forefront of digital marketing.

You can learn more about this experiment in Experimenting With Snapchat, Quora, Reddit, & Pinterest, which breaks down the exact budgets, campaign metrics, and findings for each platform.

Software To Get You Going

A variety of online tools and software can help with your accelerated marketing experiments, and the good news is, most are free! These programs all serve different purposes, with the underlying objective of tracking, measuring, and analysing performance to support different aspects of your experiments.

  • Google Analytics provides comprehensive website data to monitor the inner workings of your website, giving you the ability to analyse and measure your conversion activity, traffic channels, user demographics, and much more.
  • Google Optimize is a free website optimisation tool which easily integrates with Google Analytics and allows for effortless experiments on your site such as A/B testing, multivariate testing, and redirect testing.
  • VWO and Optimizely both provide the ability to run A/B split tests to measure the performance of varying forms, surveys, landing pages, and more, helping you optimise your campaign results.
  • Hotjar is a handy tool for recording visitor sessions, with the ability to visually record and analyse browsing behaviour, user flows, and heat maps, giving you insight into exactly what happens when visitors land on your website.
  • There are also paid tools such as GrowthHackers Experiments that allow you to rank a series of hypotheses against criteria such as impact, ease, objectives, and goals.
  • Google Ads also provides the ability to set up experiments and tests within its platform to help you split test campaign results to optimise your paid campaigns.

A quick Google search will uncover an array of additional software available to digital marketers to assist in varying types of online experiments.
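If you'd like to sanity-check a split test yourself, before or alongside using the tools above, the standard statistical approach is a two-proportion z-test on conversion counts. A minimal sketch in Python, with made-up visitor and conversion numbers for illustration:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z-score for the difference in conversion rate between variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical split test: 120/2400 conversions on page A vs 156/2400 on page B
z = two_proportion_z(120, 2400, 156, 2400)
print(round(z, 2))  # ~2.23; |z| > 1.96 means significant at the 95% level
```

In practice, decide your sample size and significance threshold before the test starts; peeking at interim results and stopping as soon as the z-score crosses 1.96 inflates your false-positive rate.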

 No great marketing decisions have ever been made on qualitative data. – John Sculley

What Are The Benefits?

Gone are the days of ‘set and forget’ campaigns.

While we may think we know how things will go, there is nothing quite like quantitative data to confirm or disprove our theories, and to uncover moments of clarity in the market space.

We live in a world that changes as quickly as the weather. It’s never been more important to have the knowledge and capacity to run your own experiments. You can sharpen the pencil on your campaigns and squeeze the most out of your budget.

Having the ability to set up, run, and analyse experiments within a short amount of time lets you remain as dynamic as the market. You can pivot and adapt your campaigns in real time.

As we’ve explored in articles such as Surge Marketing Deployment: Scaling Rapidly To Attack The Demand Wave and Quick Reaction Marketing Force: The Power Of Rapid Response, speed is the key to getting ahead of your competitors. Being able to run experiments quickly and efficiently is what will give you an edge in the market.

Experiments don’t need to be perfect. They don’t even need to be detailed or complex. They just need to give you the quantitative data to support your strategic decision making, with the goal of driving your business towards new heights.

Information is the oil of the 21st century, and analytics is the combustion engine. – Peter Sondergaard (Gartner Research)

What Should You Do?

Experiments can be the secret ingredient to unveiling your full potential within the competitive business landscape.

If you know what you can discover from a well-positioned experiment, you will begin to experience a new level of clarity around which experiments may apply to your business, industry, or campaign.

Once you establish this, you can start running experiments across your campaigns to identify opportunities for improvement and refinement. You can do this yourself, using this article and your newfound knowledge to design your own experiment.

Or you could reach out to an established marketing partner to help strategise, set up, implement and quickly measure your experiment.

If you choose the latter, we can help!

Our experienced team has handled a variety of experiments across many industries. We thrive on helping our clients refine their marketing campaigns to generate the very best results.

If you need help with setting up your next experiment or you are interested in exploring how best to optimise your campaigns, contact us today!

FAQs

What if I don’t have a separate budget for setting up and running an experiment?

You don’t need a separate budget; you can simply reallocate part of your media spend. Large companies often throw significant pots of money at outdoor media campaigns or sponsorship packages that don’t provide any measurable data to verify their value. By cutting a newspaper ad, or reducing your radio ad budget, you can easily free up budget to invest in an experiment. This will give you raw metrics and marketing insights you can use to optimise your campaigns going forward.

There are so many experiments to choose from, how do I know which one to run?

It all depends on your objective. Do you want to increase the number of conversions from your website? Run some split-test experiments on differing landing page lengths, differing form designs, or call-to-action buttons with different wording. Do you want to increase traffic to your website? Introduce a series of ad designs across several platforms to find which ones convert best, which audience demographics are most engaged, and which prompts work best at generating click-throughs.

I’ve got so many questions in my head but how do I formulate them into an experiment?

Start with a hypothesis about a segment of your marketing activity, then work backwards to think about how you can find the data to prove or disprove your theory. This is the basis of any experiment and a great first step. What is that burning question you’ve had in your mind for months? Aside from sales data, which metrics determine whether you’ve had a great month or just an ‘ok’ month of revenue? If you stripped your marketing machine back to the essentials, which parts wouldn’t be running as efficiently as they could be? All great questions, with answers waiting if you dare to dig beneath the surface.

Ok so I’ve run an experiment, I’ve got some good data but now what?

Data is soul food for any digital marketer. But without understanding how to adapt or optimise your campaigns with newfound data, your experiments may prove useless. You need to be able to tweak and adjust your campaigns based on your experiment findings. This can range from adjusting your messaging, design aesthetic, calls to action, and website user experience, through to simply changing your ‘hook’ or online offering. Sometimes you can be too close to the business to see how to leverage experimentation data, so it can be useful to employ a third party to handle your optimisation for you. And we can be that third party!

Reach out to our friendly team today to discuss how we can get more out of your website and marketing. Contact us now.