Best Business Search

Monthly Archives: September 2020

Coralogix lands $25M Series B to rethink log analysis and monitoring

September 30, 2020 No Comments

Logging and monitoring tends to be an expensive endeavor because of the sheer amount of data involved. Companies are therefore forced to pick and choose what they monitor, limiting what they can see. Coralogix wants to change that by offering a more flexible pricing model, and today the company announced a $25 million Series B and a new real-time analytics solution called Streama.

First the funding. The round was led by Red Dot Capital Partners and O.G. Tech Ventures with help from existing investors Aleph VC, StageOne Ventures, Janvest Capital Partners and 2B Angels. Today’s round, which comes after the startup’s $10 million Series A last November, brings the total to $41.2 million raised, according to the company.

When we spoke to Coralogix CEO and co-founder Ariel Assaraf last year about the Series A, he described his company as more of an intelligent application performance monitoring tool with some security log analytics built in.

Today, the company announced Streama, which has been in Alpha since July. Assaraf says companies can pick and choose how they monitor and pay only for the features they use. That means if a particular log is only tangentially important, a customer can set it to low priority and save money, and direct the budget toward more important targets.

As the pandemic has taken hold, he says that companies appreciate the ability to save money on their monitoring costs and direct those resources elsewhere in the company. “We’re basically building out this full platform that is going to be insight centric and value centric instead of volume or machine count centric in its pricing model,” Assaraf said.

Assaraf differentiates his company from others out there like Splunk, Datadog and Sumo Logic, saying his is a more modern approach that simplifies operations. “All these complicated engineering things are being abstracted away in a simple way, so that any user can very quickly create savings and demonstrate that it’s [no longer] an engineering problem, it’s more of a business value question,” he explained.

Since the A round, the company has grown from 25 to 60 people spread out between Israel and the U.S. It plans to grow to 120 people in the next year with the new funding. When it comes to diversity in hiring, he says Israel is fairly homogeneous, so there the focus is on gender parity, something he says he is working to achieve. The U.S. team is still relatively small, with just 12 employees now, but it will be expanding in the next year, and he says diversity is something he will need to be thinking about as he hires.

As part of that hiring spree, he wants to kick his sales and marketing operations into higher gear and start spending more on those areas as the company grows.


Enterprise – TechCrunch


How Twilio built its own conference platform

September 30, 2020 No Comments

Twilio’s annual customer conference was supposed to happen in May, but like everyone else who had live events scheduled for this year, it ran smack-dab into COVID-19 and was forced to cancel. That left the company wondering how to reimagine the event online. It began an RFP process to find a vendor to help, but eventually concluded it could use its own APIs to build the platform itself.

That’s a pretty bold move, but one of the key issues facing Twilio was how to recreate the in-person experience of the show floor where people could chat with specific API experts. After much internal deliberation, they realized that was what their communication API products were designed to do.

Once they committed to going their own way, they began a long process that involved figuring out must-have features, building consensus in the company, creating a development and testing cycle and finding third-party partnerships to help them when they ran into the limitations of their own products.

All that work culminates this week when Twilio holds its annual Signal Conference online Wednesday and Thursday. We spoke to In-Young Chang, director of experience at Twilio, to learn how this project came together.

Chang said once the decision was made to go virtual, the biggest issue for them (and for anyone putting on a virtual conference) was how to recreate that human connection that is a natural part of the in-person conference experience.

The company’s first step was to put out a request for proposals with event software vendors. She said that the problem was that these platforms hadn’t been designed for the most part to be fully virtual. At best, they had a hybrid approach, where some people attended virtually, but most were there in person.

“We met with a lot of different vendors, vendors that a lot of big tech companies were using, but there were pros to some of them, and then cons to others, and none of them truly fit everything that we needed, which was connecting our customers to product experts [like we do at our in-person conferences],” Chang told TechCrunch.

Even though they had winnowed the proposals down to a manageable few, they weren’t truly satisfied with what the event software vendors were offering, and they came to a realization.

“Either we find a vendor who can do this fully custom in three months’ time, or [we do it ourselves]. This is what we do. This is in our DNA, so we can make this happen. The hard part became how do you prioritize because once we made the conference fully software-based, the possibilities were endless,” she said.

All of this happened pretty quickly. The team interviewed the vendors in May, and by June made the decision to build it themselves. They began the process of designing the event software they would be using, taking advantage of their own communications capabilities, first and foremost.

The first thing they needed to do was meet with various stakeholders inside the company and figure out the must-have features in their custom platform. She said that reeling in people’s ambitions for version 1.0 of the platform was part of the challenge that they faced trying to pull this together.

“We only had three months. It wasn’t going to be totally perfect. There had to be some prioritization and compromises, but with our APIs we [felt that we] could totally make this happen,” Chang said.

They started meeting with different groups across the company to find out their must-haves. They knew that they wanted to recreate this personal contact experience. Other needs included typical conference activities like being able to collect leads and build agendas and the kinds of things you would expect to do at any conference, whether in-person or virtual.

As the team met with the various constituencies across the company, they began to get a sense of what they needed to build and they created a priorities document, which they reviewed with the Signal leadership team. “There were some hard conversations and some debates, but everyone really had goodwill toward each other knowing that we only had a few months,” she said.

Signal Concierge Agent for virtual Twilio Signal Conference

Signal Concierge Agent helps attendees navigate the online conference. Image Credits: Twilio

The team believed it could build a platform that met the company’s needs, but with only 10 developers working on it, they had a huge challenge to get it done in three months.

With one of the major priorities putting customers together with the right Twilio personnel, they decided to put their customer service platform, Twilio Flex, to work on the problem. Flex combines voice, messaging, video and chat in one interface. While the conference wasn’t a pure customer service issue, they believed that they could leverage the platform to direct requests to people with the right expertise and recreate the experience of walking up to the booth and asking questions of a Twilio employee with a particular skill set.

“Twilio Flex has Taskrouter, which allows us to assign agents unique skills-based characteristics, like you’re a video expert, so I’m going to tag you as a video expert. If anyone has a question around video, I know that we can route it directly to you,” Chang explained.
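To make the routing idea concrete, here is a minimal Python sketch of skills-based routing in the spirit of what TaskRouter provides. It is purely illustrative: the worker names, skill tags and least-busy routing rule are assumptions, and it does not call the Twilio SDK.

```python
from collections import deque
from dataclasses import dataclass, field


@dataclass
class Worker:
    name: str
    skills: set                                  # e.g. {"video", "voice"}
    queue: deque = field(default_factory=deque)  # questions waiting for this worker


def route_question(question: str, topic: str, workers: list) -> Worker:
    """Send an attendee question to the least-busy worker tagged with the topic."""
    experts = [w for w in workers if topic in w.skills]
    if not experts:
        experts = workers                        # fall back to any worker if no expert matches
    target = min(experts, key=lambda w: len(w.queue))
    target.queue.append(question)
    return target


workers = [Worker("Ana", {"video"}), Worker("Ben", {"voice", "sms"})]
print(route_question("How do I record a video room?", "video", workers).name)  # Ana
```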

They also built a bot companion, called Signal Concierge, that moves through the online experience with each attendee and helps them find what they need, applying their customer service approach to the conference experience.

“Signal Concierge is your conference companion, so that if you ever have a question about what session you should go to next or [you want to talk to an expert], there’s just one place that you have to go to get an answer to your question, and we’ll be there to help you with it,” she said.

The company couldn’t do everything with Twilio’s tools, so it turned to third parties in those cases. “We continued our partnership with Klik, a conference data and badging platform all available via API. And Perficient, a Twilio SI partner we hired to augment the internal team to more quickly implement the custom Twilio Flex experience in the tight time frame we had. And Plexus, who provided streaming capabilities that we could use in an open-source video player,” she said.

They spent September testing what they built, making sure the Signal Concierge was routing requests correctly and all the moving parts were working. They open the virtual doors on Wednesday morning and get to see how well they pulled it off.

Chang says she is proud of what her team pulled off, but recognizes this is a first pass and future versions will have additional features that they didn’t have time to build.

“This is V1 of the platform. It’s not by any means exactly what we want, but we’re really proud of what we were able to accomplish from scoping the content to actually building the platform within three months’ time,” she said.


Enterprise – TechCrunch


Facebook Ads Holiday Checklist [2020]

September 29, 2020 No Comments

Between online shopping highs, COVID, and elections, Facebook advertisers must prepare for Q4 like never before. One Sr. Social Strategist shares his top tips.

Read more at PPCHero.com
PPC Hero


Navigating a cookieless future

September 29, 2020 No Comments

30-second summary:

  • On September 16, Apple launched iOS 14, a major overhaul of the Apple operating system that would require users to authorize access to the Identifier for Advertisers (IDFA).
  • This was followed by announcements from Google that it will follow a similar path for Google Chrome, effectively turning off tracking on Safari, which commands 90 percent of browser usage on iPhones, and on Chrome, which commands five percent.
  • These moves towards user privacy and marketing compliance are effectively a pivot away from the traditional advertising and search marketing industry, and they will impact players ranging from Facebook to national media agencies like GroupM.
  • More details on how marketers can navigate in a cookieless world.

One of the most impactful changes to internet advertising and media has stayed mostly unspoken in agency and SEO chatter. However, like the switch from a desktop landscape to a mobile landscape, there is no reprieve from the coming cookieless world.

On September 16, Apple launched iOS 14, a major overhaul of the Apple operating system that would require users to authorize access to the Identifier for Advertisers (IDFA). IDFA is used to track user behavior for advertising.

This was followed by announcements from Google that it will follow a similar path for Google Chrome, effectively turning off tracking on Safari, which commands 90 percent of browser usage on iPhones, and on Chrome, which commands five percent.

These moves towards user privacy and marketing compliance are effectively a pivot away from the traditional advertising and search marketing industry, and they will impact players ranging from Facebook to national media agencies like GroupM.

Content created in partnership with SherloQ™, Inc.

National TV advertisers and PPC advertisers are not waiting around

Once again, the key movers are in injury law, an advertising and search category that is highly competitive and expensive.

Smith & Hassler, a nationally recognized personal injury law firm that famously uses Judge Alex Ferrer and William Shatner as TV spokespeople, and Mike Slocumb Law, a firm known for its use of celebrity spokespeople and a sometimes outrageous style, are both early use cases of Natural Language Understanding (NLU) for content and first-party data extraction, combined with AI that automates marketing to Google without the use of cookies.

In both cases, the companies are working with SherloQ™, powered by IBM Watson, which they have implemented to make cookieless changes and keep their website marketing compliant.

A recent story from AdWeek about a cookieless future for publishers quoted Andrew Casale, who said it best:

“Publishers haven’t seen a recovery in their CPMs, and similar to Root, believes the focus of online media trading will be publishers’ first-party data as such a method of audience targeting will mean less personal information is traded between (comparatively) anonymous ad-tech players.”

The rapid move towards first-party data and AI automation will not be limited to a single industry. Privacy is a big selling feature, and while Apple has granted an extension on IDFA enforcement, mostly to give developers time to adopt the new frameworks, Apple and Google are not going to wait for the advertising industry’s input.

If your agency or enterprise wants to learn more about how SherloQ™, powered by IBM Watson, can help navigate a cookieless world, please download our white paper to learn more about our framework.

The post Navigating a cookieless future appeared first on Search Engine Watch.

Search Engine Watch


Featured Snippet Answer Scores Ranking Signals

September 29, 2020 No Comments

Calculating Featured Snippet Answer Scores

An update this week to a patent tells us how Google may score featured snippet answers.

When a search engine ranks search results in response to a query, it may use a combination of query dependent and query independent ranking signals to determine those rankings.

A query dependent signal may depend on a term in a query, and how relevant a search result may be for that query term. A query independent signal would depend on something other than the terms in a query, such as the quality and quantity of links pointing to a result.

Answers to questions in queries may be ranked based on a combination of query dependent and query independent signals, which could determine a featured snippet answer score. An updated patent about textual answer passages tells us how those signals may be combined to generate featured snippet answer scores and to choose among answers to questions that appear in queries.

A year and a half ago, I wrote about answers to featured snippets in the post Does Google Use Schema to Write Answer Passages for Featured Snippets?. The patent that the post was about was Candidate answer passages, which was originally filed on August 12, 2015, and was granted as a continuation patent on January 15, 2019.

That patent was a continuation patent to an original one about answer passages that updated it by telling us that Google would look for textual answers to questions that had structured data near them that included related facts. This could have been something like a data table or possibly even schema markup. This meant that Google could provide a text-based answer to a question and include many related facts for that answer.

Another continuation version of the original patent was just granted this week. It provides more information and a different approach to ranking answers for featured snippets, and it is worth comparing the claims in the two versions to see how Google’s approach has changed.

The new version of the featured snippet answer scores patent is at:

Scoring candidate answer passages
Inventors: Steven D. Baker, Srinivasan Venkatachary, Robert Andrew Brennan, Per Bjornsson, Yi Liu, Hadar Shemtov, Massimiliano Ciaramita, and Ioannis Tsochantaridis
Assignee: Google LLC
US Patent: 10,783,156
Granted: September 22, 2020
Filed: February 22, 2018

Abstract

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for scoring candidate answer passages. In one aspect, a method includes receiving a query determined to be a question query that seeks an answer response and data identifying resources determined to be responsive to the query; for a subset of the resources: receiving candidate answer passages; determining, for each candidate answer passage, a query term match score that is a measure of similarity of the query terms to the candidate answer passage; determining, for each candidate answer passage, an answer term match score that is a measure of similarity of answer terms to the candidate answer passage; determining, for each candidate answer passage, a query dependent score based on the query term match score and the answer term match score; and generating an answer score that is a based on the query dependent score.

featured snippet answer scores

Candidate Answer Passages Claims Updated

The changes to the patent require more analysis of potential answers, based on both query dependent and query independent scores. The patent description provides details about both types of score. The first claim from the first patent covers query dependent scores for answers, but not query independent scores as the newest version does. The first patent provides more details about both query dependent and query independent scores in the rest of its claims, but the newer version seems to make both scores more important.

The first claim from the 2015 version of the Scoring Answer Passages patent tells us:

1. A method performed by data processing apparatus, the method comprising: receiving a query determined to be a question query that seeks an answer response and data identifying resources determined to be responsive to the query and ordered according to a ranking, the query having query terms; for each resource in a top-ranked subset of the resources: receiving candidate answer passages, each candidate answer passage selected from passage units from content of the resource and being eligible to be provided as an answer passage with search results that identify the resources determined to be responsive to the query and being separate and distinct from the search results; determining, for each candidate answer passage, a query term match score that is a measure of similarity of the query terms to the candidate answer passage; determining, for each candidate answer passage, an answer term match score that is a measure of similarity of answer terms to the candidate answer passage; determining, for each candidate answer passage, a query dependent score based on the query term match score and the answer term match score; and generating an answer score that is a measure of answer quality for the answer response for the candidate answer passage based on the query dependent score.

The remainder of the claims tell us about both query dependent and query independent scores for answers, but the claims from the newer version of the patent appear to place equal importance on the query dependent and the query independent scores. That convinced me that I should revisit this patent in a post and describe how Google may calculate answer scores based on query dependent and query independent scores.

The first claims in the new patent tell us:

1. A method performed by data processing apparatus, the method comprising: receiving a query determined to be a question query that seeks an answer response and data identifying resources determined to be responsive to the query and ordered according to a ranking, the query having query terms; for each resource in a top-ranked subset of the resources: receiving candidate answer passages, each candidate answer passage selected from passage units from content of the resource and being eligible to be provided as an answer passage with search results that identify the resources determined to be responsive to the query and being separate and distinct from the search results; determining, for each candidate answer passage, a query dependent score that is proportional to a number of instances of matches of query terms to terms of the candidate answer passage; determining, for each candidate answer passage, a query independent score for the candidate answer passage, wherein the query independent score is independent of the query and query dependent score and based on features of the candidate answer passage; and generating an answer score that is a measure of answer quality for the answer response for the candidate answer passage based on the query dependent score and the query independent score.

As it says in this new claim, the answer score has gone from being “a measure of answer quality for the answer response for the candidate answer passage based on the query dependent score” (from the first patent) to “a measure of answer quality for the answer response for the candidate answer passage based on the query dependent score and the query independent score” (from this newer version of the patent.)

This drawing is from both versions of the patent, and it shows the query dependent and query independent scores both playing an important role in calculating featured snippet answer scores:

query dependent & query independent answers combine

Query Dependent and Query Independent Scores for Featured Snippet Answer Scores

Both versions of the patent tell us how a query dependent score and a query independent score for an answer might be calculated. The first version of the patent only told us in its claims that an answer score used the query dependent score; this newer version tells us that both the query dependent and the query independent scores are combined to calculate an answer score (to decide which answer is the best choice for a query).

Before the patent discusses how query dependent and query independent signals might be used to create an answer score, it does tell us this about the answer score:

The answer passage scorer receives candidate answer passages from the answer passage generator and scores each passage by combining scoring signals that predict how likely the passage is to answer the question.

In some implementations, the answer passage scorer includes a query dependent scorer and a query independent scorer that respectively generate a query dependent score and a query independent score. In some implementations, the query dependent scorer generates the query dependent score based on an answer term match score and a query term match score.
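The patent does not give an exact formula for either score, but the relationships it describes can be sketched as follows. The plain sum and the equal weights are assumptions for illustration; the patent only says the query dependent score is based on the query term match and answer term match scores, and that the answer score is based on the query dependent and query independent scores.

```python
def query_dependent_score(query_term_match: float, answer_term_match: float) -> float:
    # The patent says the query dependent score is based on both match scores;
    # a simple sum is an assumption used here for illustration.
    return query_term_match + answer_term_match


def answer_score(query_dependent: float, query_independent: float,
                 w_dep: float = 0.5, w_indep: float = 0.5) -> float:
    # The answer score is a measure of answer quality based on both scores;
    # the weights and the weighted sum are illustrative, not from the patent.
    return w_dep * query_dependent + w_indep * query_independent


print(answer_score(query_dependent_score(0.6, 0.4), 0.8))  # 0.9
```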

Query Dependent Scoring for Featured Snippet Answer Scores

Query Dependent Scoring of answer passages is based on answer term features.

An answer term match score is a measure of similarity of answer terms to terms in a candidate answer passage.

The answer-seeking queries do not describe what a searcher is looking for since the answer is unknown to the searcher at the time of a search.

The query dependent scorer begins by finding a set of likely answer terms and compares the set of likely answer terms to a candidate answer passage to generate an answer term match score. The set of likely answer terms is likely taken from the top N ranked results returned for a query.

The process creates a list of terms from terms that are included in the top-ranked subset of results for a query. The patent tells us that each result is parsed and each term is included in a term vector. Stop words may be omitted from the term vector.

For each term in the list of terms, a term weight may be generated. The term weight for each term may be based on the number of results in the top-ranked subset in which the term occurs, multiplied by an inverse document frequency (IDF) value for the term. The IDF value may be derived from a large corpus of documents and provided to the query dependent scorer, or it may be derived from the top N documents in the returned results. The patent tells us that other appropriate term weighting techniques can also be used.

The scoring process, for each term of the candidate answer passage, determines the number of times the term occurs in the candidate answer passage. So, if the term “apogee” occurs two times in a candidate answer passage, the term value for “apogee” for that candidate answer passage is 2. However, if the same term occurs three times in a different candidate answer passage, then the term value for “apogee” for the different candidate answer passage is 3.

The scoring process, for each term of the candidate answer passage, multiplies its term weight by the number of times the term occurs in the answer passage. So, assume the term weight for “apogee” is 0.04. For the first candidate answer passage, the value based on “apogee” is 0.08 (0.04 × 2); for the second candidate answer passage, the value based on “apogee” is 0.12 (0.04 × 3).
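A short Python sketch of this term weighting and matching, reusing the “apogee” numbers from the example above. The function names are hypothetical; the patent only specifies that a term’s weight is the number of top results containing it multiplied by its IDF, and that a passage’s value for a term is the weight multiplied by the term’s occurrences in the passage.

```python
from collections import Counter


def term_weights(top_results: list[list[str]], idf: dict[str, float]) -> dict[str, float]:
    """Weight = (number of top results containing the term) * IDF value for the term."""
    doc_freq = Counter()
    for terms in top_results:
        doc_freq.update(set(terms))          # count each result at most once per term
    return {t: doc_freq[t] * idf.get(t, 0.0) for t in doc_freq}


def answer_term_match_score(passage: list[str], weights: dict[str, float]) -> float:
    """Sum, over terms in the passage, of term weight times number of occurrences."""
    counts = Counter(passage)
    return sum(weights.get(t, 0.0) * n for t, n in counts.items())


# Worked example from the text: the term weight for "apogee" is 0.04.
weights = {"apogee": 0.04}
print(round(answer_term_match_score(["apogee", "apogee"], weights), 2))            # 0.08
print(round(answer_term_match_score(["apogee", "apogee", "apogee"], weights), 2))  # 0.12
```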

Other answer term features can also be used to determine an answer term score. For example, the query dependent scorer may determine an entity type for an answer response to the question query. The entity type may be determined by identifying terms that identify entities, such as persons, places, or things, and selecting the terms with the highest term scores. The entity type may also be identified from the query (e.g., for the query [who is the fastest man], the entity type for an answer is “man”). For each candidate answer passage, the query dependent scorer then identifies entities described in the candidate answer passage. If the entities do not include a match to the identified entity type, the answer term match score for the candidate answer passage is reduced.

Assume the following candidate answer passage is provided for scoring in response to the query [who is the fastest man]: Olympic sprinters have often set world records for sprinting events during the Olympics. The most popular sprinting event is the 100-meter dash.

The query dependent scorer will identify several entities–Olympics, sprinters, etc.–but none of them are of the type “man.” The term “sprinter” is gender-neutral. Accordingly, the answer term score will be reduced. The score may be binary, e.g., 1 for the presence of a term of the entity type and 0 for its absence; alternatively, it may be a likelihood that the correct term is in the candidate answer passage. An appropriate scoring technique can be used to generate the score.
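A hedged sketch of that entity-type check. Real systems would rely on an entity recognizer; here the passage entities and their types are supplied by hand, and the penalty factor is an assumption, since the patent only says the score is reduced.

```python
def adjust_for_entity_type(score: float, expected_type: str,
                           passage_entities: dict[str, str],
                           penalty: float = 0.5) -> float:
    """Reduce the answer term match score if no entity of the expected type appears."""
    if expected_type not in passage_entities.values():
        return score * penalty   # the 0.5 penalty is illustrative, not from the patent
    return score


# For the query [who is the fastest man], a passage about Olympic sprinters
# contains no entity of type "man", so its score is reduced.
print(adjust_for_entity_type(0.12, "man", {"Olympics": "event", "sprinters": "group"}))  # 0.06
```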

Query Independent Scoring for Featured Snippet Answer Scores

Scoring answer passages according to query independent features.

Candidate answer passages may be generated from the top N ranked resources identified for a search in response to a query. N may be the same number as the number of search results returned on the first page of search results.

The scoring process can use a passage unit position score. This passage unit position is the location within the result from which the candidate answer passage comes; the higher the location, the higher the score.

The scoring process may use a language model score. The language model score generates a score based on candidate answer passages conforming to a language model.

One type of language model is based on sentence and grammar structures. This could mean that candidate answer passages with partial sentences may have lower scores than candidate answer passages with complete sentences. The patent also tells us that if structured content is included in the candidate answer passage, the structured content is not subject to language model scoring. For instance, a row from a table may have a very low language model score but may be very informative.

Another language model that may be used considers whether text from a candidate answer passage appears similar to answer text in general.

A query independent scorer accesses a language model of historical answer passages, where the historical answer passages are answer passages that have been served for all queries. Answer passages that have been served generally have a similar n-gram structure, since answer passages tend to include explanatory and declarative statements. A query independent score could use a trigram model to compare the trigrams of the candidate answer passage to the trigrams of the historical answer passages. A higher-quality candidate answer passage will typically have more trigram matches to the historical answer passages than a lower-quality candidate answer passage.
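A minimal sketch of that trigram comparison in Python. The fraction-of-matching-trigrams measure is an assumption for illustration; the patent only says that candidate passages with more trigram matches to historical answer passages score higher.

```python
from collections import Counter


def trigrams(tokens: list[str]) -> Counter:
    """Count the trigrams (3-token sequences) in a token list."""
    return Counter(zip(tokens, tokens[1:], tokens[2:]))


def language_model_score(passage: list[str], historical: Counter) -> float:
    """Fraction of the passage's trigrams that also appear in historical answer passages."""
    grams = trigrams(passage)
    if not grams:
        return 0.0
    matched = sum(n for g, n in grams.items() if g in historical)
    return matched / sum(grams.values())


historical = trigrams("the moon is approximately 238,900 miles from the earth".split())
print(language_model_score("the moon is about 238,900 miles away".split(), historical))  # 0.2
```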

Another step involves a section boundary score. A candidate answer passage could be penalized if it includes text that passes formatting boundaries, such as paragraphs and section breaks, for example.

The scoring process determines an interrogative score. The query independent scorer searches the candidate answer passage for interrogative terms. A potential answer passage that includes a question or question term, e.g., “How far away is the moon from the Earth?”, is generally not as helpful to a searcher looking for an answer as a candidate answer passage that only includes declarative statements, e.g., “The moon is approximately 238,900 miles from the Earth.”

The scoring process also determines discourse boundary term position scores. A discourse boundary term is one that introduces a statement or idea contrary to, or a modification of, a statement or idea that has just been made; examples include “conversely,” “however,” and “on the other hand.”

A candidate answer passage beginning with such a term receives a relatively low discourse boundary term position score, which lowers the answer score.

A candidate answer passage that includes but does not begin with such a term receives a higher discourse boundary term position score than it would if it began with the term.

A candidate answer passage that does not include such a term receives a high discourse boundary term position score.
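These three cases can be captured in a small Python function. The specific values returned are assumptions; the patent only establishes the ordering (no such term is best, containing one is worse, beginning with one is worst).

```python
DISCOURSE_BOUNDARY_TERMS = ("conversely", "however", "on the other hand")


def discourse_boundary_position_score(passage: str) -> float:
    """Score ordering per the patent; the 1.0 / 0.6 / 0.2 values are illustrative."""
    text = passage.lower()
    if any(text.startswith(term) for term in DISCOURSE_BOUNDARY_TERMS):
        return 0.2   # passage begins with a discourse boundary term
    if any(term in text for term in DISCOURSE_BOUNDARY_TERMS):
        return 0.6   # passage contains, but does not begin with, such a term
    return 1.0       # passage contains no discourse boundary term


print(discourse_boundary_position_score("However, the moon is about 238,900 miles away."))  # 0.2
print(discourse_boundary_position_score("The moon is about 238,900 miles away."))           # 1.0
```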

The scoring process determines result scores for the results from which the candidate answer passage was created. These could include a ranking score, a reputation score, and a site quality score. The higher these scores are, the higher the answer score will be.

A ranking score is based on the ranking score of the result from which the candidate answer passage was created. It can be the search score of the result for the query and will be applied to all candidate answer passages from that result.

A reputation score of the result indicates the trustworthiness and/or likelihood that the subject matter of the resource serves the query well.

A site quality score indicates a measure of the quality of a web site that hosts the result from which the candidate answer passage was created.

Component query independent scores described above may be combined in several ways to determine the query independent score. They could be summed; multiplied together; or combined in other ways.
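As a final sketch, the component scores could be combined like this. The weighted sum is only one of the options the patent allows, and the component names and example values are made up for illustration.

```python
from typing import Dict, Optional


def query_independent_score(components: Dict[str, float],
                            weights: Optional[Dict[str, float]] = None) -> float:
    """Combine component scores into a single query independent score (weighted sum)."""
    weights = weights or {name: 1.0 for name in components}
    return sum(weights.get(name, 1.0) * value for name, value in components.items())


components = {
    "passage_position": 0.8,
    "language_model": 0.7,
    "interrogative": 1.0,
    "discourse_boundary": 1.0,
    "result_scores": 0.9,
}
print(round(query_independent_score(components), 2))  # 4.4
```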



The post Featured Snippet Answer Scores Ranking Signals appeared first on SEO by the Sea ⚓.


SEO by the Sea ⚓


During Covid, Eating Disorder Patients Turn to Apps

September 29, 2020 No Comments

Anorexia, bulimia, and binge eating patients are facing novel challenges as in-person care is on hold. Can tech tools fill in the treatment gaps?
Feed: All Latest


Adobe beefs up developer tools to make it easier to build apps on Experience Cloud

September 29, 2020 No Comments

Adobe has had a developer program for years called Adobe.io, but today at the Adobe Developers Live virtual conference, the company announced some new tools with a fresh emphasis on helping developers build custom apps on the Adobe Experience Cloud.

Jason Woosley, VP of developer experience and commerce at Adobe, says that the pandemic has forced companies to build enhanced digital experiences much more quickly than they might have, and the new tools being announced today are at least partly related to helping speed up the development of better online experiences.

“Our focus is very specifically on making the experience-generation business something that’s very attractive to developers and very accessible to developers so we’re announcing a number of tools,” Woosley told TechCrunch.

The idea is to build a more complete framework over time to make it easier to build applications and connect to data sources that take advantage of the Experience Cloud tooling. For starters, Project Firefly is designed to help developers build applications more quickly by providing a higher level of automation than was previously available.

“Project Firefly creates an extensibility framework that reduces the boilerplate that a developer would need to get started working with the Experience Cloud, and extends that into the customizations that we know every implementation eventually needs to differentiate the storefront experience, the website experience or whatever customer touch point as these things become increasingly digital,” he said.

In order to make those new experiences open to all, the company is also announcing React Spectrum, an open source set of libraries and tools designed to help members of the Adobe developer community build more accessible applications and websites.

“It comes with all of the accessibility features that often get forgotten when you’re in a race to market, so it’s nice to make sure that you will be very inclusive with your design, making sure that you’re bringing on all aspects of your audiences,” Woosley said.

Finally, a big part of interacting with Experience Cloud is taking advantage of all of the data that’s available to help build those more customized interactions with customers that having that data enables. To that end, the company is announcing some new web and mobile software development kits (SDKs) designed to help make it simpler to link to Experience Cloud data sources as you build your applications.

Project Firefly is available in developer preview starting today. Several React Spectrum components and some data connection SDKs are also available today. The company intends to keep adding to these various pieces in the coming months.


Enterprise – TechCrunch


Synergized search is key to success in the new normal

September 28, 2020 No Comments

30-second summary:

  • Given that consumers run billions of searches every day — with Google estimated to process 40,000 per second — it’s clear marketers need a smart strategy to cut through the competition.
  • The question is: Will they drive the highest traffic and performance with SEO or PPC?
  • Head of Paid Media at Tug shares insight on how perfectly balancing these two facets can lead to success in the new normal.

Consumer activity online is at an all-time high. So, it’s no surprise many marketers are aiming to make the most of it by hooking their attention early, at the point of search. But deciding how best to do so isn’t necessarily easy.

Given that consumers run billions of searches every day — with Google estimated to process 40,000 per second — it’s clear marketers need a smart strategy to cut through the competition. The question is: will they drive the highest traffic and performance with search engine optimization (SEO) or pay per click (PPC)?

Both have their own advantages and drawbacks. PPC is a quick win, enabling businesses to rapidly reach consumers and boost visibility. But its lead generation power only lasts while the money flows and, depending on campaign scale and scope, those costs can run high. Meanwhile, SEO delivers more lasting rewards and higher click-through rates (CTRs), often for less investment. Yet marketers might have a long wait before organic searches pay off, and may still fall behind dominant digital marketplaces for certain keywords.

Ultimately, the smartest route lies neither one way nor the other, but in a combination of both. Blending PPC and SEO not only generates stronger results but also balances out their respective shortcomings, offering marketers the best chance of success in the new ever-changing normal.

Utilizing a combination of paid and organic search tactics isn’t new – but it’s never been clear how marketers can best do this, or a way to visualize the data for optimization. Leveraging PPC and SEO in conjunction with one another can be challenging, but creating the perfect synergy is possible if marketers focus on the following three factors:

Unify search operations

With consumers spending a quarter of their waking day online, marketers have plenty of chances to spark their interest through search. To outmanoeuvre rivals and capture eyeballs first, brands must make fast yet informed decisions about which approach will produce the ideal outcome.

Achieving this requires holistic insight which, in turn, calls for greater unity. Due to the general view of PPC and SEO as separate entities, teams often operate in silos, but this isolates valuable knowledge around consumer behaviour and the tactics that generate the biggest rewards. Simple as it sounds, removing divisions and encouraging teams to share their insight can significantly improve campaign execution and drive more efficient CPAs.

For example, information from the PPC teams on the best performing keywords and ad copy will help SEO teams to optimize meta descriptions and website content.

Sharing information on what keywords campaigns are covering will also prevent the doubling up of efforts – for example, as organic keyword positions improve, there might be an opportunity to pull back PPC activity and reallocate budget to other keywords to increase the overall coverage. Similarly, updates from the SEO team on keywords that are particularly competitive to rank in top positions are an opportunity for PPC to drive incremental conversions. And, on a more fundamental level, by sharing any new or emerging search terms with each other, both SEO and PPC teams can ensure they are up-to-date and reacting as quickly as possible to opportunities.

Select tech that drives collaboration

The next step is integrated technology. Implementing tools that collate and merge data from multiple sources — including PPC and SEO campaigns — will make collaboration easier. That’s not to mention generating a complete overview of collective search operations, performance, and opportunities for businesses.

A holistic and unified dashboard, for example, can provide visibility of combined search performance against KPIs and competitor activity. This enables PPC and SEO teams to identify where there are opportunities and how strategies can be adjusted to leverage them, without duplicating each other’s efforts. Marketers can understand where organic rankings are high, and competitor activity low, and vice versa, which means they know when to reduce PPC activity, as well as opportunities where it can drive incremental conversions over and above what SEO can deliver.

All of this, however, depends on accuracy and usability. Information needs to be reliable and actionable, which means simply joining up the data dots isn’t enough: in addition to robust cleansing, processing and storage, tools must offer accessible visualization.

Although frequently overlooked, clearly-presented data plays a huge part in enhancing everyday activity. Providing a streamlined picture of keywords and performance data is vital, but to ensure teams can pinpoint prime SERPs, accelerate traffic, and increase conversions, businesses also need tools that allow their teams to quickly find and activate key insights.

Don’t forget human checks

Dialing up tech use, however, does come with a word of warning – no matter how smart platforms may be, they can’t entirely replace human experience and expertise. On their own, sophisticated tools bring a range of benefits that go far beyond translating data into a more cohesive and user-friendly format. The most advanced boast immediate alerts that tell PPC teams where their competitors are bidding — or not — and use artificially intelligent (AI) analysis to deliver a cross-market, sector, and classification perspective on SEO activity.

Human knowledge is still paramount to steering search campaigns in the right direction and picking up on the nuances that machines miss. For instance, problem-solving machines might take the quickest path to objective completion for certain pages or messages, but seasoned search professionals may see the potential for longer-term uses that deliver higher incremental value.

As a result, organizations must avoid the perils of over-reliance on their marketing tools. By persistently applying manual reviews and checking automated conclusions against human knowledge, they can tap the best of tech and people power.

Today’s marketing leaders are grappling with multiple uncertainties, but when it comes to search, the way forward is clear. PPC and SEO are complementary forces; producing deeper insights and higher returns together, as well as minimizing risk. By connecting the two and taking a considered approach to data-driven search strategy, businesses can ensure campaigns are strong enough to succeed in the new normal and take on whatever tomorrow brings.

Asher Gordon is Head of Paid Media at Tug. He leads a multi-disciplined media team who plan, buy, and deliver integrated media plans for a diverse set of clients. With over 10 years’ experience working across multiple markets and brands at PHD and Wavemaker, Asher works with clients to further their marketing goals and drive their business forward.

The post Synergized search is key to success in the new normal appeared first on Search Engine Watch.

Search Engine Watch


Announcing GROW Towards a Better Normal [Summit]

September 28, 2020 No Comments

Join us Thursday, Oct. 1 for a full-day Digital Solutions Summit focused on helping global digital marketers GROW Towards a Better Normal. The summit sessions will span a variety of topics including implementing growth tactics for SEO, driving growth through automation, and optimizing content to increase conversions. Can’t make it live or for the full […]

Read more at PPCHero.com
PPC Hero


Powered by WP Robot