Best Business Search

Tag: Testing

How a testing model is driving SEAT and CUPRA’s search marketing performance

June 23, 2022

“Will we ever be able to put search marketing strategy in the driver’s seat?” This is almost every search marketer’s dilemma, as the community remains at the mercy of Google’s algorithms and updates.

SEAT S.A., the Barcelona-based multinational automaker and part of the Volkswagen Group, has developed a testing model that is driving growth for its brands, SEAT and CUPRA, in the European market. SEAT is the young, cool, and urban brand that offers cars with striking designs and several mobility solutions, while CUPRA is an unconventional brand defined by its progressive design and the performance of its electrified models.

We spoke with Corinne Calcabrina, Global Media Manager at SEAT S.A.; Sophie Santallusia, Global Paid Search and Programmatic Director; and Alejandro Sebastian, Global Search Team Lead at PHD Media Spain, to discuss the ‘Performance innovation program’ (SEAT S.A.’s testing model) and the value it adds to the business.

A fast-paced industry

Digital is a fast-moving sector, and search is always reinventing itself with new formats and ever-changing ways to create and manage accounts. The teams at SEAT and CUPRA had several pain points:

1. Staying on top of all innovations and changes in the industry

“We needed to become first movers who actively capitalize on opportunities as they appear. To ensure this, our teams needed to take advantage of search space dynamics, apply best practices, and gain a technological and intelligence edge over the competition.”

– Corinne Calcabrina, Global Media Manager at SEAT S.A.

2. Improving visibility of the team’s hard work

“While we were putting in all this effort, we wanted to improve our team’s visibility. While we are busy being the best performing channel, always reinventing, working towards results and efficiencies, we often miss the glitter of other channels. Adding an official scope and framework means we get to report and showcase our achievements.”

– Corinne Calcabrina, Global Media Manager, SEAT S.A.

3. Maintaining performance and improving efficiency

“As the best performing channel on a last-click attribution model, we were also facing multiple challenges. The pandemic lockdowns and microchip shortages made search performance improvements a constant, ongoing must-have. This meant decreasing the cost per click (CPC) and improving the cost per acquisition (CPA) were always core reasons to develop such a testing model.”

– Corinne Calcabrina, Global Media Manager, SEAT S.A.

Putting testing in the driver’s seat: The SEAT and CUPRA Performance innovation program

The SEAT S.A. testing model, the ‘Performance innovation program’, was designed to align with the inherent love of innovation that runs at the core of the SEAT and CUPRA brands. The model was built centrally to keep the brands focused on the strength of paid search: improving cost efficiencies and accelerating performance.

Corinne and her team at SEAT S.A., together with their agency PHD Media, reviewed the brand strategies for SEAT and CUPRA, their performance, and local needs. They created a framework that provides structure, helps the brands expand their market share, and delivers central visibility into testing results, along with specific testing roadmaps based on quarterly goals that align with local markets’ needs and strategies.

“We then applied our tests, sharing the hypothesis of what we hoped to achieve (highlighting results from other markets) and then folding the test into the main strategy.

“We had a clear timeline and roadmap. We always test and learn. This allows us to have a specific position with partners, always being part of the alphas and betas, testing new formats, and always trying to improve results at the same time,” Corinne shared.

To facilitate consistency, the SEAT S.A. team organized tests throughout the year, pacing one test at a time for an ad group or campaign to maintain efficiency and gain clear observations. The roadmap was built around these factors:

  • Priorities for markets based on the impact and workload
  • Changes that Google makes to ad formats, or features that it sunsets or iterates on

The search marketing grand prix: data, automation, and visual optimization

SEAT S.A. and PHD Media started differentiating strategies by keyword type, defining them for each ad group. Keywords were segmented based on brand and non-brand search, their role, and their respective KPIs. This data was then used during auction bidding. Artificial intelligence (AI) was used to segment audiences and target top-of-the-funnel ads. Comparative insights from these tests were later fed back into the business to inform the direction of strategy.
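As a rough illustration of that kind of segmentation (a hypothetical sketch, not SEAT S.A.’s actual setup), the brand/non-brand split can be as simple as bucketing each keyword by whether it contains a brand term. The keywords and brand terms below are invented examples:

```python
# Hypothetical keyword list and brand terms, for illustration only.
keywords = [
    "seat ibiza price",
    "cupra formentor review",
    "best compact suv",
    "electric car lease deals",
]
brand_terms = ("seat", "cupra")

def segment(keyword: str) -> str:
    """Label a search keyword as brand or non-brand (naive substring match)."""
    hit = any(term in keyword.lower() for term in brand_terms)
    return "brand" if hit else "non-brand"

for kw in keywords:
    print(f"{kw}: {segment(kw)}")
```

Each bucket would then carry its own KPIs and bid strategy; a production version would need smarter matching, since “seat” is also a common English word.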

To improve click-through rate (CTR) and lower CPCs, the SEAT S.A. team focused on adding visuals to ads, improving ad copy, and testing new extensions. They also decreased CPAs by using bid strategies and the system’s AI to get the most out of their budgets.

To master their visual impact on audiences, SEAT S.A. used image extensions for each ad across all their campaigns. Google displayed these images based on multiple factors, like clicks, content, and keyword triggers, optimizing toward the best performing ones.

From a data point of view, SEAT S.A. used Search Ads 360 (SA360) to manage and monitor their Google Ads and Bing Ads campaigns. The data sets tracked all the core essentials of paid search:

  • Keyword conversion performance
  • Ad copies
  • Audience data through all the custom bidding options available in SA360

Outcomes

The ‘Performance innovation program’ has helped SEAT and CUPRA achieve one of their best tests yet, one which catalyzed their search performance on cost per visit (CPV), a main KPI signaling top-of-the-funnel conversions. CPV improved by 30% and cost per acquisition (CPA) improved by 37%.

SEAT S.A. (SEAT and CUPRA) is now equipped with new ways to measure and analyze conversions on a market-by-market basis.

Sharing intelligence across diverse markets

After completing the testing phase, the SEAT S.A. team and their global partner PHD Media reported on results and observations. Sharing their learnings and insights with other markets has empowered other teams to benefit from the knowledge and expertise derived from the successful test prototypes. Focusing on components that drive results has challenged teams across markets and facilitated constant learning while embracing changes and new features. The SEAT and CUPRA teams are now strongly positioned to outperform the competition.

Gearing up for a cookieless future

Going cookieless will bring challenging times and impact the search channel. SEAT and CUPRA plan to counter this with the use of Google Analytics 4 (GA4) to maintain performance and target the right audience. Opening up to new visual formats, like Discovery campaigns and MMA/MSAN from Bing, will also take an important place within search, as the core of search may evolve toward more automation and less granularity and control.

Greater focus on measurement and a privacy-first future

The team is testing ‘consent mode’ with GA4 and ‘enhanced conversions’ to estimate the attrition due to privacy guidelines. They are also focused on identifying and designing a risk contingency plan for the paid search elements that they won’t be able to test in the near future.

“We are testing all the new solutions and features that Google is bringing to the market in terms of privacy and cookieless capabilities. In particular, our testing is focused on deploying the full suite of Google Analytics 4 (GA4), site-wide tagging, consent mode, and enhanced conversions.

Additionally, we are testing the new audience segments that GA4 allows within a privacy-first ecosystem on our paid search campaigns. We are seeing some positive and promising results.”

– Corinne Calcabrina, Global Media Manager at SEAT S.A.

SEAT S.A and PHD Media are actively focused on Google solutions for mapping markets and audiences that are privacy compliant and applicable for targeting segments.

They are also working towards gathering and connecting first-party data, like CRM audiences and customer match solutions.




The Ultimate How-to Guide on Google Ads A/B Testing

May 10, 2022

Learn all you need to know about Google Ads A/B testing and make sure your ROI starts trending in the right direction.

Read more at PPCHero.com


Twitter is testing a new anti-abuse feature called ‘Safety Mode’

September 7, 2021

Twitter’s newest test could provide some long-awaited relief for anyone facing harassment on the platform.

The new product test introduces a feature called “Safety Mode” that puts up a temporary line of defense between an account and the waves of toxic invective that Twitter is notorious for. The mode can be enabled from the settings menu and toggles on an algorithmic screening process that filters out potential abuse for seven days.

“Our goal is to better protect the individual on the receiving end of Tweets by reducing the prevalence and visibility of harmful remarks,” Twitter Product Lead Jarrod Doherty said.

Safety Mode won’t be rolling out broadly — not yet, anyway. The new feature will first be available to what Twitter describes as a “small feedback group” of about 1,000 English-language users.

In deciding what to screen out, Twitter’s algorithmic approach assesses a tweet’s content — hateful language; repetitive, unreciprocated mentions — as well as the relationship between an account and the accounts replying. The company notes that accounts you follow or regularly exchange tweets with won’t be subject to the blocking features in Safety Mode.

For anyone in the test group, Safety Mode can be toggled on in the privacy and safety options. Once enabled, an account will stay in the mode for the next seven days. After the seven-day period expires, it can be activated again.

In crafting the new feature, Twitter says it spoke with experts in mental health, online safety and human rights. The partners Twitter consulted with were able to contribute to the initial test group by nominating accounts that might benefit from the feature, and the company hopes to focus on female journalists and marginalized communities in its test of the new product. Twitter says that it will start reaching out to accounts that meet the criteria of the test group — namely accounts that often find themselves on the receiving end of some of the platform’s worst impulses.

Earlier this year, Twitter announced that it was working on developing new anti-abuse features, including an option to let users “unmention” themselves from tagged threads and a way for users to prevent serial harassers from mentioning them moving forward. The company also hinted at a feature like Safety Mode that could give users a way to defuse situations during periods of escalating abuse.

Being “harassed off of Twitter” is, unfortunately, not that uncommon. When hate and abuse get bad enough, people tend to abandon Twitter altogether, taking extended breaks or leaving outright. That’s obviously not great for the company either, and while it’s been slow to offer real solutions to harassment, it’s obviously aware of the problem and working toward some possible solutions.



Testing the Power of the YouTube Lead Form Extension

July 25, 2021

YouTube remains a strong platform and the lead form extensions make it all the more accessible. These are the exciting results of my recent lead form test.

Read more at PPCHero.com


Testing platform Tricentis acquires performance testing service Neotys

March 30, 2021

If you develop software for a large enterprise company, chances are you’ve heard of Tricentis. If you don’t develop software for a large enterprise company, chances are you haven’t. The software testing company with a focus on modern cloud and enterprise applications was founded in Austria in 2007 and grew from a small consulting firm to a major player in this field, with customers like Allianz, BMW, Starbucks, Deutsche Bank, Toyota and UBS. In 2017, the company raised a $165 million Series B round led by Insight Venture Partners.

Today, Tricentis announced that it has acquired Neotys, a popular performance testing service with a focus on modern enterprise applications and a tests-as-code philosophy. The two companies did not disclose the price of the acquisition. France-based Neotys launched in 2005 and raised about €3 million before the acquisition. Today, it has about 600 customers for its NeoLoad platform. These include BNP Paribas, Dell, Lufthansa, McKesson and TechCrunch’s own corporate parent, Verizon.

As Tricentis CEO Sandeep Johri noted, testing tools were traditionally script-based, which also meant they were very fragile whenever an application changed. Early on, Tricentis introduced a low-code tool that made the automation process both easier and more resilient. Now, as even traditional enterprises move to DevOps and release code at a faster speed than ever before, testing is becoming both more important and harder for these companies to implement.

“You have to have automation and you cannot have it be fragile, where it breaks, because then you spend as much time fixing the automation as you do testing the software,” Johri said. “Our core differentiator was the fact that we were a low-code, model-based automation engine. That’s what allowed us to go from $6 million in recurring revenue eight years ago to $200 million this year.”

Tricentis, he added, wants to be the testing platform of choice for large enterprises. “We want to make sure we do everything that a customer would need, from a testing perspective, end to end. Automation, test management, test data, test case design,” he said.

The acquisition of Neotys allows the company to expand this portfolio by adding load and performance testing as well. It’s one thing to do the standard kind of functional testing that Tricentis already did before launching an update, but once an application goes into production, load and performance testing becomes critical as well.

“Before you put it into production — or before you deploy it — you need to make sure that your application not only works as you expect it, you need to make sure that it can handle the workload and that it has acceptable performance,” Johri noted. “That’s where load and performance testing comes in and that’s why we acquired Neotys. We have some capability there, but that was primarily focused on the developers. But we needed something that would allow us to do end-to-end performance testing and load testing.”

The two companies already had an existing partnership and had integrated their tools before the acquisition — and many of its customers were already using both tools, too.

“We are looking forward to joining Tricentis, the industry leader in continuous testing,” said Thibaud Bussière, president and co-founder at Neotys. “Today’s Agile and DevOps teams are looking for ways to be more strategic and eliminate manual tasks and implement automated solutions to work more efficiently and effectively. As part of Tricentis, we’ll be able to eliminate laborious testing tasks to allow teams to focus on high-value analysis and performance engineering.”

NeoLoad will continue to exist as a stand-alone product, but users will likely see deeper integrations with Tricentis’ existing tools over time, including Tricentis Analytics, for example.

Johri tells me that he considers Tricentis one of the “best kept secrets in Silicon Valley” because the company not only started out in Europe (even though its headquarters is now in Silicon Valley) but also because it hasn’t raised a lot of venture rounds over the years. But that’s very much in line with Johri’s philosophy of building a company.

“A lot of Silicon Valley tends to pay attention only when you raise money,” he told me. “I actually think every time you raise money, you’re diluting yourself and everybody else. So if you can succeed without raising too much money, that’s the best thing. We feel pretty good that we have been very capital efficient and now we’re recognized as a leader in the category — which is a huge category with $30 billion spend in the category. So we’re feeling pretty good about it.”



How to perform SEO A/B testing in Google Search Console

January 26, 2021

30-second summary:

  • SEO A/B testing allows site owners to understand whether the changes they make to their website have a positive impact on keyword rankings.
  • Anything on a website can be A/B tested, but the best variants to test are those that have a direct relationship to Google’s ranking algorithm.
  • How site owners can best execute an SEO A/B test will depend on the size of their website and their ability to easily roll back changes.
  • If an A/B test proves that a specific optimization is effective, a site owner can more confidently incorporate identical or similar changes to other pages of their website.

For SEO strategists, it is sometimes difficult to know which of the many changes we make to our websites actually impact their overall SEO performance. Moving websites from page 10 to page 2 can usually be accomplished by following SEO best practices, but the trek to page 1? That requires far more granular attention to the specific changes we make to our landing pages. Enter SEO A/B testing, one of the best ways to narrow in on and understand the effectiveness (or ineffectiveness) of specific optimizations.

Many digital marketers are comfortable using A/B testing features in their PPC campaigns or to analyze user behavior in Google Analytics, but fewer are incorporating this powerful strategy to better understand which of their on-page optimizations have the greatest impact on keyword rankings.

SEO A/B testing can seem intimidating, but with the right tools, it’s actually fairly simple to perform. Not only does A/B testing help SEOs identify the most impactful optimizations, but it also gives them a way to quantify their efforts. Although SEO A/B testing is more often utilized by advanced SEO strategists, site owners who are comfortable working on the backend of their website have a great opportunity to elevate their SEO strategy through well-structured A/B tests.

What is SEO A/B testing?

When it comes to controlled experiments, testing two variants is the fundamental building block from which all other testing is built. A/B testing is simply measuring how a single variant impacts an outcome. In the case of SEO, the outcomes are either better, worse, or static keyword rankings.

It doesn’t take much to get started. You’ll need Google Search Console, for one (the absolute truth of your SEO rankings), as well as two clearly defined variants you want to test. Lastly, site owners need a dev or staging environment where they can save a version of their website prior to the SEO A/B test, in case the changes do not produce the desired results.

Best use cases for SEO A/B testing

Anything can be A/B tested on our websites, but for SEO purposes, certain site elements are more likely to result in keyword rankings improvements because of the weight they carry in Google’s algorithm. For that reason, the below elements are the best use cases for SEO A/B tests.

Title tags

Choosing the right title tags is hugely important and has a direct impact on search results. Title tag changes are very impactful from a rankings perspective because they directly influence click-through rate (CTR). Google has a normalized expected CTR for searches, and if your landing pages continually fall below the mark, it will negatively impact your overall chances of ranking.

Meta descriptions

For those websites that already have a lot of keywords on page one and are therefore getting lots of impressions, A/B testing meta descriptions can be really beneficial. Like title tags, they directly impact CTR, and improving them can result in significantly more clicks and thus better rankings. 

Schema markup

If you can, it’s good to add schema markup to all of your web pages (you miss all the shots you don’t take!), but if certain pages on your site still don’t have schema.org markup, adding it can be a great use case for an A/B test.
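For readers who haven’t worked with it, schema markup is typically JSON-LD embedded in the page head. Here is a minimal, hypothetical example generated with Python for convenience; every product value is invented, and the vocabulary comes from schema.org:

```python
import json

# Hypothetical Product markup; every field value here is an invented example.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "description": "A sample product used to illustrate schema markup.",
    "offers": {
        "@type": "Offer",
        "price": "19.99",
        "priceCurrency": "USD",
    },
}

# JSON-LD is embedded in the page inside a script tag.
print('<script type="application/ld+json">')
print(json.dumps(product_schema, indent=2))
print("</script>")
```

Adding the markup to the test page (or cohort) while leaving the control untouched gives you a clean before/after comparison.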

Internal links

Internal links communicate your site architecture to Google, and they also distribute PageRank across your website. Getting internal links right can produce dramatic keyword ranking improvements, particularly for larger websites with thousands of landing pages. Focus on header and footer links because of how much PageRank they shift. For websites with product pages, you can use A/B testing to find the best anchor text for your internal links.

New content

Adding good content to your landing pages is always beneficial because longer content implies topical depth. Using a landing page optimizer tool can help you improve the semantic richness of your content, and a subsequent A/B test empowers you to measure whether Google positively responds to those quality signals. 

Large groups of pages

For ecommerce sites, or those that may add a large group of pages all at once, you can use A/B testing to measure whether those pages are crawled and indexed in a way that helps or harms your existing rankings.

Information architecture

There are certain elements of information architecture that are more specific to SEO. Google likes page experience features like jump links and carousels, so understanding the impact of adding these features to your web pages is another reason to perform an SEO A/B test.

Site migrations

Whenever you make core technical changes to your website, A/B testing is a great way to measure how those changes impact keyword rankings. It also helps prevent your site from experiencing a significant rankings drop over the long term.

The types of SEO A/B testing

There are a few different types of A/B tests that you can execute on your website. It will largely depend on the number of landing pages you have as well as the category of variants you are testing out.

A then B

The most basic form of A/B testing, this type of test simply compares the performance of a single page against a version with one different variant. It is better suited to smaller sites and easier to implement, particularly if you’re confident that the changes are in the right direction.

Multi-page

This type of A/B test allows for far more statistically significant results and can be executed on large sites with hundreds to thousands of landing pages. Instead of measuring a single variant on a single page, pick two groups of similar pages (I recommend at least 50 pages per cohort) and change the variant on every page in one of the groups.

Multivariate

This common form of experimentation has the same core mechanisms of an A/B test, but it increases the number of variants being tested. Multivariate testing can be great for measuring user behavior, but it is less effective in measuring search performance when you’re trying to discover whether a specific optimization has a direct impact on keyword rankings. 

How to perform an SEO A/B test 

The easiest way to do A/B tests is by using site snapshots and rollbacks. Basically, you take a site snapshot, make the change to your site, and sit back for a week and watch what happens. If the change to the site has improved your SEO, you’ll see it in your benchmarking. If you’ve made a mistake, then you just roll back to a previous version of your site and move forward with different optimizations. 

Here is a simple step-by-step process of an SEO A/B test:

  1. Take a site snapshot or do a site backup prior to implementing changes, so you can return to a previous version of your website if the change is not effective
  2. Determine the variant being tested (for example, title tag, schema.org markup, and so on) and make the change to a page or cohort of similar pages
  3. Wait 7-14 days to determine the impact of the change
  4. Compare the keyword rankings for the variant page(s) to the original. This can be done in Google Search Console or a Google Search Console dashboard (see the sketch below)
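As a rough sketch of step four, assuming you have exported the Search Console performance report for the pre-test and post-test periods as CSVs (the file names are placeholders, and the column labels follow the standard query export; verify them against your own files):

```python
import pandas as pd

# Placeholder file names: Search Console performance report exports
# for the periods before and after the change.
before = pd.read_csv("queries_before_test.csv")
after = pd.read_csv("queries_after_test.csv")

# Join the two periods on the query and compare average position.
merged = before.merge(after, on="Top queries", suffixes=("_before", "_after"))
merged["position_delta"] = merged["Position_before"] - merged["Position_after"]

# A positive delta means the query ranks closer to #1 after the test.
movers = merged.sort_values("position_delta", ascending=False)
print(movers[["Top queries", "Position_before", "Position_after", "position_delta"]].head(20))
```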

If the optimizations are impactful, you can proceed with making similar changes to other pages of your site. More targeted changes like adding keyword-rich title tags and meta descriptions won’t necessarily directly translate to other pages. However, more technical optimizations like schema.org and information architecture can be implemented across the entirety of your website with more confidence if an A/B test proves them impactful.

Some practical advice for SEO testing across hosting environments

The way you proceed with step one will depend on your dev and staging environment. If you’re hosting on DigitalOcean, you can take site snapshots. If you’re on WP Engine, you can choose a site backup to restore from. For best-in-class control, use version control with Git, which allows you to roll back to any version of your website.

Version control is like Track Changes in a Word document, but the history never gets deleted. With version control, even if you delete something or change it, there is a chronology to all of the changes that have been made — when a line of code was created, when it was edited — and you can roll back at any time.

It’s also important to make sure all of the in-development pages and test environment pages on your site have the robots noindex tag. Some SEO specialists might tell you that adding the pages to your blocklist in robots.txt or using on-page rel canonical tags would be sufficient, but at LinkGraph I’ve seen numerous examples where pages or subdomains were added to robots.txt and continued to appear in search results for months. A canonical tag alone is insufficient to block the dev domain from being crawled and indexed.

The best strategy is to use the robots noindex tag; better still, use both the noindex tag and rel canonicals for added protection.
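As a quick safety net, a short script can audit that your staging pages actually serve noindex, via either the X-Robots-Tag header or a meta robots tag. A minimal sketch, with hypothetical URLs and a crude string match instead of proper HTML parsing:

```python
import requests

# Hypothetical staging URLs; replace with your own dev/test pages.
staging_urls = [
    "https://staging.example.com/",
    "https://staging.example.com/products/",
]

for url in staging_urls:
    resp = requests.get(url, timeout=10)
    # noindex can be served as an HTTP header...
    header_noindex = "noindex" in resp.headers.get("X-Robots-Tag", "").lower()
    # ...or as a meta robots tag in the HTML (crude string check).
    html = resp.text.lower()
    meta_noindex = 'name="robots"' in html and "noindex" in html
    print(f"{url}: {'ok' if header_noindex or meta_noindex else 'MISSING noindex'}")
```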

How long does it take to know whether the variant was effective?

How long you wait to measure your A/B tests will depend on how often Google crawls your website. If you test parts of your domain that aren’t crawled often, you may have to wait longer in order for Google to actually see your changes. If you’re changing primary pages that Google crawls frequently, you can likely see whether those optimizations had any impact within 7-14 days.

As you evaluate the effectiveness of your optimizations, beware of confounding variables. Multiple backlinks in a short period of time, adding JavaScript, and, of course, algorithm changes can sharply impact keyword rankings. Any type of experimentation is more accurate when you can eliminate variables, so do your best not to schedule A/B tests during link building campaigns, core algorithm updates, or any period of high search volatility.

When done correctly, A/B testing can be a powerful way to refine your SEO strategy toward maximum results. Not only can A/B testing help site owners make more data-driven decisions, but it can also help SEO strategists prove the value of their work to clients or executive leadership who may be wary of investing in SEO.

Manick Bhan is the founder and CTO of LinkGraph, an award-winning digital marketing and SEO agency that provides SEO, paid media, and content marketing services. He is also the founder and CEO of SearchAtlas, a software suite of free SEO tools. You can find Manick on Twitter @madmanick.



Reflect wants to help you automate web testing without writing code

July 26, 2020

Reflect, a member of the Y Combinator Summer 2020 class, is building a tool to automate website and web application testing, making it faster to get your site up and running without waiting for engineers to write testing code, or for human testers to run the site through its paces.

Company CEO and co-founder Fitz Nowlan says his startup’s goal is to allow companies to have the ease of use and convenience of manual testing, but the speed of execution of automated or code-based testing.

“Reflect is a no-code tool for creating automated tests. Typically when you change your website, or your web application, you have to test it, and you have the choice of either having your engineers build coded tests to run through and ensure the correctness of your application, or you can hire human testers to do it manually,” he said.

With Reflect, you simply teach the tool how to test your site or application by running through it once, and based on those actions, Reflect can create a test suite for you. “You enter your URL, and we load it in a browser in a virtual machine in the cloud. From there, you just use your application just like a normal user would, and by using your application, you’re telling us what is important to test,” Nowlan explained.

He adds, “Reflect will observe all of your actions throughout that whole interaction with that whole browser session. And then from those actions, it will distill that down into a repeatable machine executable test.”

Nowlan and co-founder Todd McNeal started the company in September 2019 after spending five years together at a digital marketing startup near Philadelphia, where they experienced problems with web testing first-hand.

They launched a free version of the product in April, just as we were beginning to feel the full force of the pandemic in the U.S., a point that was not lost on him. “We didn’t want to delay any longer and we just felt like, you know, you’ve got to get up there and swing the bat,” he said.

Today, the company has 20 paying customers, and he has found that the pandemic has sped up sales in some instances while slowing them down in others.

He says the remote YC experience has been a positive one, and in fact he couldn’t have participated had they had to show up in California, as the founders have families and homes in Pennsylvania. He says that the remote nature of the current program forces you to be fully engaged mentally to get the most out of it.

“It’s just a little more mental work to prepare yourself and to have the mental energy to stay locked in for a remote batch. But I think if you can get over that initial hump, the information flow and the knowledge sharing is all the same,” he said.

He says as technical founders, the program has helped them focus on the sales and marketing side of the equation, and taught them that it’s more than building a good product. You still have to go out there and sell it to build a company.

He says his short-term goal is to get as many people as he can using the platform, which will help them refine their ability to automate the test building. For starters, that involves recording activities on-screen, but over time they plan to layer on machine learning and that requires more data.

“We’re going to focus primarily over the next six to 12 months on growing our customer base — both paid and unpaid — and I really mean that we want people to come in and create tests. Even if they [use the free product], we’re benefiting from that creation of that test,” he said.



Pandora launches interactive voice ads into beta testing

July 25, 2020

Pandora is launching interactive voice ads into wider public testing, the company announced this morning. The music streaming service first introduced the new advertising format, where users verbally respond to advertiser prompts, back in December with help from a small set of early adopters, including Doritos, Ashley HomeStores, Unilever, Wendy’s, Turner Broadcasting, Comcast and Nestlé.

The ads begin by explaining to listeners what they are and how they work. They then play a short and simple message followed by a question that listeners can respond to. For example, a Wendy’s ad asked listeners if they were hungry, and if they said “yes,” the ad continued with a recommendation of what to eat. An Ashley HomeStores ad engaged listeners by offering tips on a better night’s sleep.

The format is meant in particular to aid advertisers in connecting with users who are not looking at their phone. For example, when people are listening to Pandora while driving, cooking, cleaning the house or doing some other hands-free activity.

Since their debut, Pandora’s own data indicates the ads have been fairly well received in terms of the voice format: 47% of users said they either liked or loved the concept of responding with their voice, and 30% felt neutral. The stats paint a picture of an overall positive reception, given that users don’t typically like ads at all. In addition, 72% of users said they found the ad format easy to engage with.

However, Pandora cautioned advertisers that more testing is needed to understand which ads get users to respond and which do not. Based on early alpha testing, ads with higher engagement seemed to be those that were entertaining, humorous, or used a recognizable brand voice, it says.

As the new ad format enters into beta testing, the company is expanding access to more advertisers. Advertisers including Acura, Anheuser-Busch, AT&T, Doritos, KFC, Lane Bryant, Purex Laundry Detergent, Purple, Unilever, T-Mobile, The Home Depot, Volvo and Xfinity, among others, are signed up to test the interactive ads.

This broader test aims to determine what the benchmarks should be for voice ads, whether the ads need tweaking to optimize for better engagement, and whether ads are better for driving conversions at the upper funnel or if consumers are ready to take action based on the ads’ content.

Related to the rollout of interactive voice ads, Pandora is also upgrading its “Voice Mode” feature, launched last year and made available to all users last July. The feature will now offer listeners on-demand access to specific tracks and albums in exchange for watching a brand video via Pandora’s existing Video Plus ad format, the same as for text-based searches.

 



How A/B and multivariate testing can skyrocket your social media conversions

July 5, 2020

30-second summary:

  • Less than a quarter of marketers are satisfied with the conversion rates they achieve today.
  • A/B and multivariate testing help to put your website’s variables to work, with various text boxes, images, and call-to-actions capable of being tested among different audiences simultaneously.
  • Where A/B testing can perform trials of two ideologies, multivariate testing can display a wide range of varied elements to show which ones are preferred by audiences.
  • Multivariate testing can help to optimize web pages based on the traffic arriving from different social networks.

Regardless of whether you’re aiming to foster more leads, email signups, or purchases, there are few more effective ways to win new customers than through rigorous testing.

Given the array of tools at the disposal of marketers, just 22% claim that they’re satisfied with the conversion rates they achieve.

Fortunately, A/B testing and multivariate testing can seamlessly combine to fully optimize the process of turning your social media traffic into conversions. 

To understand the power of A/B and multivariate testing, it’s important to remember the array of variables that comprise your website. Text boxes, images, videos, call-to-actions, and various multimedia plugins all combine to bring audiences an experience that aims to result in a conversion. Testing helps marketers to discover the exact combination of elements best placed to encourage visitors to act on the interest that brought them to your site in the first place. The same practice can apply to just about any marketing approach, from social campaigns to PPC advertising.

For example, have you arranged the images on your website in an effective manner? Or will visitors feel overwhelmed by the overflow of visuals? It’s virtually impossible to anticipate which layout will be the most effective in keeping visitors on your pages for longer – but actively testing different setups can offer up tangible insights into how prospective customers interact with the various elements that comprise your pages. 

It’s also possible that different layouts can lead to different effects. If your website’s tone is more informal, you might find that you build more engagement with audiences but ultimately lack the sales you were aiming for, whereas more formal imagery could create more purchasing intent but less of an engaging customer experience.

Different businesses will require different levels of website performance, and rigorous multivariate testing helps marketers to see what online features offer varied results for users. 

Adopting A/B and multivariate testing tools can help you to generate more leads, a higher volume of subscriptions, and ultimately attract more sales. But let’s take a closer look at how both A/B and multivariate testing can directly boost your social media conversions:

Learning your A/Bs

A/B testing, often referred to as split testing, actively compares two versions of a web page, email, or another facet of a business in a way that can measure their respective performance.

This can be done by rendering one version to be observed by one group and the other version to a different cross-section of users.

To help illustrate the key concept behind A/B testing, think of yourself as a website owner, and imagine that you have two landing page layouts that you can’t decide on. 

Through tools like Unbounce and Optimizely, A/B testing allows you to test one page by showcasing it to one group while sending another to a different group and studying the results. It’s possible to study the performance of each landing page by consulting metrics pertaining to traffic, conversions or purchase intent. 

Generally, these metrics will show one landing page performing better than its competitor, and you’ll gain a clear idea of which layout would be most effective online.

Tapping into multivariate testing

[Image: multivariate testing structure. Source: Leadpages]

While multivariate testing can certainly complement A/B testing, these two practices are not fundamentally the same. 

Where A/B testing helps both website owners and marketers to see which design is more popular within different control groups, multivariate testing allows users to test various campaign elements all at the same time. 

This means that different combinations of images, multimedia, text, and call-to-actions can all be displayed to different users. While some may be greeted with a large image and a sign-up prompt, others may see an introductory video when they arrive on the same website, for instance.

Given the wide range of element-based combinations that your campaign could feature, multivariate testing is regarded as one of the most efficient ways to gain insights into the impact that your marketing campaign could have once it’s correctly optimized. 
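To see how quickly the variant count grows, here is a minimal sketch that enumerates every combination of a few hypothetical page elements; in practice a testing tool handles the bucketing and measurement:

```python
from itertools import product

# Hypothetical element variants for a single landing page.
heroes = ["large_image", "intro_video"]
headlines = ["benefit_led", "question_led"]
ctas = ["sign_up", "learn_more"]

# A full multivariate test exposes every combination,
# so the variant count multiplies: 2 x 2 x 2 = 8 here.
variants = list(product(heroes, headlines, ctas))
for i, (hero, headline, cta) in enumerate(variants, start=1):
    print(f"variant {i}: hero={hero}, headline={headline}, cta={cta}")
print(f"total variants: {len(variants)}")
```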

Finteza can be a handy tool when it comes to analyzing the performance of A/B tests, enabling you to see which page performs better from a sales funnel point of view.  


Advances in marketing technology have catapulted the capabilities of multivariate testing tools into the limelight. With the right software, it’s possible to conduct tests in real-time with the same audience – providing a true sample to draw results from. This means that various combinations of elements can be sent to audiences at different times, with analytical software on hand to interpret the results and figure out which blend of elements operate most efficiently.

The value of A/B testing on social media 

[Image: an A/B testing example on social media. Source: MeetEdgar]

One of the most effective uses of A/B testing can be found on social media. Campaigns can be optimized for various audiences by sending different messages to different control groups. Where 50% of the control group is shown message A, the other 50% is shown message B. The winning version is determined by which received the highest volume of clicks or impressions. The most effective option is then broadcast to all audiences from there on in. 
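To call a winner with some statistical footing rather than raw click counts alone, a two-proportion z-test is one common approach. A minimal sketch with made-up click and impression numbers, using only the standard library and a normal approximation:

```python
from math import sqrt, erf

# Hypothetical results for the two message variants.
clicks_a, impressions_a = 120, 5000
clicks_b, impressions_b = 165, 5000

ctr_a = clicks_a / impressions_a
ctr_b = clicks_b / impressions_b

# Pooled CTR under the null hypothesis that both variants perform equally.
pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
se = sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
z = (ctr_b - ctr_a) / se

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"CTR A: {ctr_a:.2%}, CTR B: {ctr_b:.2%}, z = {z:.2f}, p = {p_value:.4f}")
```

If the p-value falls below your chosen threshold (0.05 is conventional), the difference is unlikely to be chance and the winning message can be promoted to all audiences.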

Social A/B testing helps to enlighten marketers as to which message is most engaging to audiences, and what type of content creates more meaningful engagements. This form of testing can also help to provide insights into what time of day a message will be most likely to hit home and which call-to-actions are best positioned to generate conversions. 

A/B testing is best summarized by WIRED writer Brian Christian, who explains that A/B helps to generate high-quality focus-groups that can test new ideas in real-time, without any prior conditioning. “Without being told, a fraction of users are diverted to a slightly different version of a given web page and their behavior compared against the mass of users on the standard site. If the new version proves superior—gaining more clicks, longer visits, more purchases—it will displace the original,” Christian surmised. 

The marketing landscape is ever-changing, and very few marketers can seriously claim to have the vision and anticipation required to stay ahead of trends. With this in mind, A/B testing in real-time is essential to gain insights into your target audiences.  

While it can seem like a complex approach to conversion optimization, there are plenty of advanced tools that can aid A/B testing methods. Notably, tools like Evergage can help to optimize headlines for linked articles on different social media platforms. Content is an essential part of the process of generating new leads, but A/B testing helps marketers to figure out how best to deliver content on different platforms – leading to greater levels of traffic and subsequent conversions.

Multivariate optimization

While A/B testing can work wonders in helping marketers to decide between two different campaign ideas using quantifiable metrics, multivariate testing can deliver more comprehensive and exponential insights. 

Of course, social media itself is a rigid place for testing different messages and campaigns, but multivariate testing tools have the power to deliver fully customized website experiences for traffic arriving from various social sources. This can be a particularly effective way of catering to the different demographics of social networks – from the relative maturity of the microblogging platform Twitter, to the more vivacious and vibrant youthfulness of Snapchat and TikTok.

[Image: an A/B and multivariate testing statistic. Source: Online Sales Guide Tips]

Given the vast array of elements that can be altered during multivariate testing, it’s important to turn to a tool that can make the whole process of tweaking landing pages and content as simple as possible. 

VWO is an effective platform for undertaking not only multivariate tests but also A/B and split URL testing. With the help of visual editors, marketers and website owners alike can change elements on the pages they wish to test and deploy different landing pages for visitors arriving from different places across the web. Furthermore, the tool helps users to study metrics based on how long a visitor spends on a page, how far they scroll, their exit intent, and a host of other custom triggers.

The marketing landscape is ever-developing due to the arrival of more intricate and engaging technology. While A/B testing has existed in the world of marketing for some time, multivariate approaches can bring unprecedented levels of optimization and insight into the performance of different ideas and concepts. 

For better or worse, the world is in love with social media. But different platforms have evolved to be favored by different user bases. Advanced tools and testing methods can now provide brands with the agility to take on competitors on different social fronts by crafting heavily tested, personalized experiences depending on where their traffic is coming from. The World Wide Web is developing into an increasingly competitive place – advanced testing helps to give conscientious marketers a fighting chance.

Peter Jobes is the Content Marketing Manager at Solvid, a digital marketing agency that specializes in SEO, paid advertising, and website design.



A Comprehensive Guide to Ad Testing

May 21, 2020

Times have changed and so has the sophistication of ad testing. Here’s a list of things to take into consideration for your next ad test.

Read more at PPCHero.com

