Best Business Search

Monthly Archives: October 2019

Facebook sues OnlineNIC for domain name fraud associated with malicious activity

October 31, 2019

Facebook today announced it has filed suit in California against domain registrar OnlineNIC and its proxy service ID Shield for registering domain names that pretend to be associated with Facebook, such as www-facebook-login.com and facebook-mails.com. Facebook says these domains are intentionally designed to mislead and confuse end users into believing they’re interacting with Facebook.

These fake domains are also often associated with malicious activity, like phishing.

While some who register such domains hope to eventually sell them back to Facebook at a marked-up price, earning a profit, others have worse intentions. And with the launch of Facebook’s own cryptocurrency, Libra, a number of new domain cybersquatters have emerged. Facebook was recently able to take down some of these, like facebooktoken.org and ico-facebook.org, one of which had already started collecting personal information from visitors by falsely touting a Facebook ICO.

Facebook’s new lawsuit, however, focuses specifically on OnlineNIC, which Facebook says has a history of allowing cybersquatters to register domains with its privacy/proxy service, ID Shield. The suit alleges that the registered domains, like hackingfacebook.net, are being used for malicious activity, including “phishing and hosting websites that purported to sell hacking tools.”

The suit also references some 20 other domain names that it says are confusingly similar to Facebook and Instagram trademarks.


OnlineNIC has been sued before for allowing this sort of activity, including by Verizon, Yahoo, Microsoft and others. In the case of Verizon (disclosure: TechCrunch parent), OnlineNIC was found liable for registering more than 600 domain names similar to Verizon’s trademark, and the courts awarded $33.15 million in damages as a result, Facebook’s filing states.

Facebook is asking for a permanent injunction against OnlineNIC’s activity, as well as damages.

The company says it took this issue to the courts because OnlineNIC has not been responsive to its concerns. Facebook proactively reports instances of abuse to domain name registrars and their privacy/proxy services, and often works with them to take down malicious domains. But the issue is widespread: tens of millions of domain names are registered through such services today, and not all of the businesses behind them are reputable. Some, like OnlineNIC, will not investigate or even respond to Facebook’s abuse reports.

The news of the lawsuit was previously reported by CNET and other domain name news sources, based on courthouse filings.

Attorney David J. Steele, who previously won the $33 million judgment for Verizon, is representing Facebook in the case.

“By mentioning our apps and services in the domain names, OnlineNIC and ID Shield intended to make them appear legitimate and confuse people. This activity is known as cybersquatting and OnlineNIC has a history of this behavior,” writes Facebook, in an announcement. “This lawsuit is one more step in our ongoing efforts to protect people’s safety and privacy,” it says.

OnlineNIC has been asked for comment and we’ll update if it responds.


Social – TechCrunch


Why website security affects SEO rankings (and what you can do about it)

October 31, 2019

A few years ago I started a website, and to my delight, the SEO efforts I was making to grow it were yielding results. Then one day I checked my rankings and got the shock of my life: they had fallen, badly.

I thought I was doing my SEO right and felt that was enough, but there was more to it. I hadn’t paid attention to my website’s security, and I didn’t even know it mattered to Google as a ranking factor. There were other security concerns I was ignoring too. As far as I was concerned back then, none of it mattered as long as I had good content.

Obviously I was wrong, and I now know that if you really want to rank higher and increase your site’s search traffic, there is more to it than building links and churning out content. Understanding Google’s algorithm and its ranking factors is crucial.

Google currently considers over 200 ranking factors when determining where to rank a site, and as you might expect, one of them is how well protected your site is. Google says website security is a top priority, and it invests heavily in ensuring that all of its services, including Gmail and Google Drive, use top-notch security and other privacy tools by default, all in a bid to make the internet generally safer.

Unfortunately, I was uninformed about these factors until my rankings started dropping. Below are four things you can do to protect your site.

Four steps to get started on website security

1. Get security plug-ins installed

On average, a typical small business website gets attacked 44 times each day, and software “bots” attack these sites more than 150 million times every week. This applies to WordPress and non-WordPress websites alike.

Malware breaches can lead to hackers stealing your data, to data loss, or even to losing access to your website entirely. In some cases an attack can deface your website, which will not just hurt your brand’s reputation but also your SEO rankings.

To prevent that from happening, enhance your website security with security plugins (WordPress plugins if you’re on WordPress, or their equivalents on other platforms). These plugins not only block brute-force and malware attacks, they harden your site’s security, addressing each platform’s specific vulnerabilities and countering other hack attempts that could threaten your website.

2. Use very strong passwords

As tempting as it is to use a password you can easily remember, don’t. Surprisingly, the most common password is still “123456”. You can’t afford to take such risks.

Make the effort to generate a secure password: mix letters, numbers, and special characters, and make it long. And this is not just for you. Ensure that everyone who has access to your website is held to the same high standard you hold yourself to.
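
For illustration, here is a minimal sketch of what “mix it up and make it long” can look like in practice, written in Python using only the standard library’s secrets module. The 20-character length and the required character classes are illustrative choices, not an official standard:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a long random password mixing letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until at least one of each character class is present.
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password)):
            return password

print(generate_password())
```

A password manager achieves the same thing with less effort, but the point stands: length plus mixed character classes beats anything you can memorize.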

3. Ensure your website is constantly updated

As much as using a content management system (CMS) comes with a lot of benefits, it also carries risks. According to this Sucuri report, vulnerabilities in a CMS’s extensible components are the leading cause of website infections. This is because these tools are usually created as open-source software, so their code is easily accessible, to hackers as well as everyone else.

To protect your website, make sure your plugins, CMS, and apps are all updated regularly. 
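
One related check that is easy to automate is whether your site publicly advertises which CMS version it runs, since an outdated version string is an easy tip-off for attackers. Below is a rough sketch in Python (standard library only) that looks for the common meta “generator” tag; the URL is a placeholder, and the regex assumes the tag’s name attribute comes before its content attribute:

```python
import re
import urllib.request

def find_generator_tag(url: str):
    """Return the page's meta 'generator' value (e.g. 'WordPress 5.2'), if any."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    # Assumes name="generator" appears before content="..." in the tag.
    match = re.search(
        r'<meta[^>]+name=["\']generator["\'][^>]+content=["\']([^"\']+)["\']',
        html,
        re.IGNORECASE,
    )
    return match.group(1) if match else None

# Placeholder domain; point this at your own site.
print(find_generator_tag("https://example.com"))
```

If this prints a specific version number, update your CMS and consider suppressing the version string.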

4. Install an SSL certificate

[Image: installing an SSL certificate for website security]

If you pay attention, you will notice that some URLs begin with “https://” while others start with “http://”. You have likely noticed this when making an online payment. The big question: what does that “s” mean, and where did it come from?

To explain it in very simple terms, that extra “s” signals that your connection to the website is encrypted and secure, so any data you enter on that website is safe. The little “s” represents a technology known as SSL.

But why is website security important for SEO ranking?

Following Google’s Chrome update in 2017, sites that have forms but no SSL certificate are marked as “Not Secure”. SSL, or “Secure Sockets Layer”, is the technology that encrypts the link between a browser and a web server, protects the site from hackers, and ensures that all data passed between browser and server remains private.

[Image: why website security matters for SEO rankings, HTTP vs HTTPS]

A secure website shows a locked padlock in the URL bar; sites without SSL certificates are tagged “Not Secure” instead. This applies to any website that has a form.

According to research carried out by HubSpot, 82% of respondents to a consumer survey said they would leave a website that is not secure. And since Google Chrome already holds about 67% of browser market share, that is a lot of traffic to lose.

[Image: survey results, how many users would browse a website that is not secure]

Technically, the major benefit of Hypertext Transfer Protocol Secure (HTTPS) over Hypertext Transfer Protocol (HTTP) is that it gives users a more secure connection over which to share personal data with you. That additional layer of security becomes especially important if you accept any form of payment on your site.

To move from HTTP to HTTPS, you have to get an SSL certificate installed on your website.

[Image: how an SSL certificate secures a website]

Once your SSL certificate is successfully installed and configured on the web server, Google Chrome will show a padlock, indicating a secure connection between the browser and the web server. For you, this means that even if a hacker manages to intercept your data, it will be impossible for them to decrypt it.
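
An expired certificate triggers the same scary browser warnings as a missing one, so it is worth monitoring expiry. Here is a minimal sketch using Python’s standard library ssl and socket modules to read how many days a site’s certificate has left; the hostname is a placeholder:

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(hostname: str, port: int = 443) -> int:
    """Return the number of days until a site's TLS certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # The 'notAfter' field looks like 'Jun  1 12:00:00 2025 GMT'.
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

print(cert_days_remaining("example.com"))  # placeholder hostname
```

Many certificate authorities and hosts renew automatically, but a periodic check like this catches misconfigured renewals before your visitors do.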

Security may have only a minor direct effect on your website’s ranking, but it affects your website in many indirect ways. Improving it may come at a small price, but in the end the effort is worth it.

Segun Onibalusi is the Founder and CEO at SEO POW, an organic link building agency. He can be found on Twitter @iamsegun_oni.

The post Why website security affects SEO rankings (and what you can do about it) appeared first on Search Engine Watch.

Search Engine Watch


Pre-made calendar with over 300 holidays to help plan editorial content

October 31, 2019

A carefully planned content marketing strategy contains several key ingredients including an understanding of who you’re creating content for (e.g., your persona or personas), how your content will help them, and some key performance indicators to measure success.

However, even the most thoughtful and well-planned content strategy can run into roadblocks without a detailed editorial plan. The editorial plan should include what categories and topics you plan to write about, how you intend to amplify your content (e.g., social media, email, etc.) and—the most important bit of all—a list of relevant, highly engaging ideas that incorporates a balance of evergreen and time-sensitive content.

Event-specific content can be challenging to create with any consistency, but with some planning and foresight, it is possible to plan out your editorial calendar in advance. One way to do this is to align some of your topics with seasonal holidays, observance days, and themes. 

[Image: holiday planner for social media]

A holiday for every week, month and season 

At CommonMind, we’ve compiled a holiday planner specifically aimed at social media content planning. It contains more than 300 holidays bucketed in three categories as follows:

  • 2019/2020 U.S. National Holidays: This calendar contains all the top favorites like Christmas, New Year’s Day and Tax Day (that last one is somebody’s favorite, I’m sure).
  • Educational Calendar/Events: This includes key dates such as Global Family Day and National Science Fiction Day, which are observed globally. 
  • A Food-themed Calendar: Technically, these aren’t holidays, but they’re fun to observe and perfect for helping fill your editorial calendar, particularly if you are in the food and beverage industry (though this isn’t a requirement).

Since a long list of every conceivable holiday can seem a bit daunting to wrap your brain around, we’ve also created an embedded Google calendar that can be viewed in weekly or monthly increments or printed. 

[Image: November 2019 Holiday Calendar – Source: CommonMind]

Holiday planning isn’t just for retailers

When people think of the holiday season, it tends to mean the period of time between Thanksgiving and New Year’s (although it’s been creeping up in the calendar to incorporate Halloween as well). But holiday content planning isn’t just for retailers or companies whose business ebbs and flows depending on the season. Here are a few examples of how some lesser-known holidays and observed days can inspire great content.

World Vegan Day (November 2, 2019): This is relevant to a variety of businesses in the health and wellness industry. Here are a few examples:

  • A nutritionist could write a piece about how to create a nutrient-rich vegan diet.
  • A healthcare provider could create a list of physical signs for vegans to be aware of that indicate they’re not getting enough of a specific vitamin or mineral.
  • A fitness expert (or gym) could write about how to ensure vegans have enough energy for various types of workouts.

World Kindness Day (November 13, 2019)

  • A marketing agency could write about an ad campaign or case study which features kindness as the main theme.
  • A veterinary clinic could write about how kindness helps both pets and their owners live happy, more fulfilling lives.
  • Any number of businesses can write about kindness as their approach to doing business, such as through employee wellness and medical programs, community service and involvement, or promoting an internal culture of kindness.

National Hot Cocoa Day (December 13, 2019)

  • This is a cocoa-manufacturer’s dream holiday and the perfect day to promote their cocoa products with a blog post as well as via social media.
  • Food-related organizations (coffee shops, restaurants, caterers, grocery stores, etc.) could create an event around this day (e.g., drop in for a free cup of cocoa!) and promote it via their blog and social media accounts.
  • Retailers can cash in on the height of shopping season by offering free cocoa in stores, coupons that fall on this day, and stories that humanize the company which can be featured on the blog (e.g., feature an employee cocoa-related story).

As you can see, becoming familiar with nonstandard holidays as well as observance days can help spur creative ideas for content that’s relevant to a variety of businesses and industries (you don’t have to sell cocoa to take advantage of National Hot Cocoa Day).

Our Google Holiday Calendar is a great way to familiarize yourself with upcoming holidays and can be imported into your own calendar for easy reference. If that feels overwhelming, you can also peruse the long list of holidays to begin brainstorming and filling out your editorial calendar for the rest of 2019 and into 2020.

Happy content planning!

Jacqueline Dooley is Director of Digital Strategy for CommonMind.

The post Pre-made calendar with over 300 holidays to help plan editorial content appeared first on Search Engine Watch.

Search Engine Watch


Samsung ramps up its B2B partner and developer efforts

October 31, 2019

Chances are you mostly think of Samsung as a consumer-focused electronics company, but it actually has a very sizable B2B business as well, which serves more than 15,000 large enterprises and hundreds of thousands of SMB entrepreneurs via its partners. At its developer conference this week, it’s putting the spotlight squarely on this side of its business — with a related hardware launch as well. The focus of today’s news, however, is on Knox, Samsung’s mobile security platform, and Project AppStack, which will likely get a different name soon, and which provides B2B customers with a new mechanism to deliver SaaS tools and native apps to their employees’ devices, as well as new tools for developers that make these services more discoverable.

At least in the U.S., Samsung hasn’t really marketed its B2B business all that much. With this event, the company is clearly looking to change that.

At its core, Samsung is, of course, a hardware company, and as Taher Behbehani, the head of its U.S. mobile B2B division, told me, Samsung’s tablet sales actually doubled in the last year, and most of these were for industrial deployments and business-specific solutions. To better serve this market, the company today announced that it is bringing the rugged Tab Active Pro to the U.S. market. Previously, it was only available in Europe.

The Active Pro, with its 10.1″ display, supports Samsung’s S Pen, as well as Dex for using it on the desktop. It’s got all of the dust and water-resistance you would expect from a rugged device, is rated to easily support drops from about four feet high and promises up to 15 hours of battery life. It also features LTE connectivity and has an NFC reader on the back to allow you to badge into a secure application or take contactless payments (which are quite popular in most of the world but are only very slowly becoming a thing in the U.S.), as well as a programmable button to allow business users and frontline workers to open any application they select (like a barcode scanner).

“The traditional rugged devices out there are relatively expensive, relatively heavy to carry around for a full shift,” Samsung’s Chris Briglin told me. “Samsung is growing that market by serving users that traditionally haven’t been able to afford rugged devices or have had to share them between up to four co-workers.”

Today’s event is less about hardware than software and partnerships, though. At the core of the announcements is the new Knox Partner Program, a new way for partners to create and sell applications on Samsung devices. “We work with about 100,000 developers,” said Behbehani. “Some of these developers are inside companies. Some are outside independent developers and ISVs. And what we hear from these developer communities is when they have a solution or an app, how do I get that to a customer? How do I distribute it more effectively?”

This new partner program is Samsung’s solution for that. It’s a three-tier partner program that’s an evolution of the existing Samsung Enterprise Alliance program. At the most basic level, partners get access to support and marketing assets. At all tiers, partners can also get Knox validation for their applications to highlight that they properly implement all of the Knox APIs.

The free Bronze tier includes access to Knox SDKs and APIs, as well as licensing keys. At the Silver level, partners will get support in their region, while Gold-level members get access to the Samsung Solutions Catalog, as well as the ability to be included in the internal catalog used by Samsung sales teams globally. “This is to enable Samsung teams to find the right solutions to meet customer needs, and promote these solutions to its customers,” the company writes in today’s announcement. Gold-level partners also get access to test devices.

The other new service that will enable developers to reach more enterprises and SMBs is Project AppStack.

“When a new customer buys a Samsung device, no matter if it’s an SMB or an enterprise, depending on the information they provide to us, they get to search for and they get to select a number of different applications specifically designed to help them in their own vertical and for the size of the business,” explained Behbehani. “And once the phone is activated, these apps are downloaded through the ISV or the SaaS player through the back-end delivery mechanism which we are developing.”

For large enterprises, Samsung also runs an algorithm that looks at the size of the business and the vertical it is in to recommend specific applications, too.

Samsung will run a series of hackathons over the course of the next few months to figure out exactly how developers and its customers want to use this service. “It’s a module. It’s a technology backend. It has different components to it,” said Behbehani. “We have a number of tools already in place, we have to fine-tune others, and we also, to be honest, want to make sure that we come up with a POC in the marketplace that accurately reflects the requirements and the creativity of what the demand is in the marketplace.”


Enterprise – TechCrunch


How Lasers Work, According to the World’s Top Expert

October 30, 2019

Lasers help us pay for groceries and zap us back into health, but what’s their secret? Nobel laureate Donna Strickland steps us through the science.
Feed: All Latest


Negative Keywords: How to Use Them and Why They’re Important

October 30, 2019

Now that you have your basic PPC account set up and running, you will need to implement some negative keywords. If you aren’t familiar with what those are or how to find them, you have come to the right place! In this blog, I cover basic strategies for implementing and finding negative keywords for your accounts. 

Read more at PPCHero.com
PPC Hero


The evolution of Google’s rel “no follow”

October 30, 2019

Google updated the nofollow attribute on Tuesday, September 10, 2019, a change it says aims to help fight comment spam. The nofollow attribute had remained unchanged for 15 years, but Google made this change as the web evolves.

Google also announced two new link attributes to help website owners and webmasters clearly call out what type of link is being used:

rel=”sponsored”: Use the sponsored attribute to identify links on your site that were created as part of advertisements, sponsorships or other compensation agreements.

rel=”ugc”: UGC stands for User Generated Content, and the ugc attribute value is recommended for links within user-generated content, such as comments and forum posts.

rel=”nofollow”: Use this attribute for cases where you want to link to a page but don’t want to imply any type of endorsement, including passing along ranking credit to another page.

March 1st, 2020 changes

All of these link attributes now serve as hints for ranking purposes, and from March 1, 2020, nofollow will be treated as a hint for crawling and indexing purposes too. Anyone who was relying on rel=nofollow to block a page from being indexed should look at other methods of preventing pages from being crawled or indexed.
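
If you want to see where a page stands before the deadline, here is a small sketch (Python standard library only) that fetches a page and tallies the rel values on its anchor tags; the URL is a placeholder:

```python
from collections import Counter
from html.parser import HTMLParser
import urllib.request

class LinkRelAuditor(HTMLParser):
    """Tally rel values (nofollow, sponsored, ugc, ...) across <a> tags."""

    def __init__(self):
        super().__init__()
        self.rel_counts = Counter()

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        rel = dict(attrs).get("rel")
        if rel:
            # rel can hold several space-separated values, e.g. "nofollow ugc".
            self.rel_counts.update(rel.lower().split())
        else:
            self.rel_counts["(no rel / followed)"] += 1

url = "https://example.com"  # placeholder: a page on your own site
with urllib.request.urlopen(url, timeout=10) as resp:
    auditor = LinkRelAuditor()
    auditor.feed(resp.read().decode("utf-8", errors="replace"))
print(auditor.rel_counts.most_common())
```

The output gives a quick picture of how many outbound links are followed, nofollowed, or already using the newer sponsored and ugc values.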

John Mueller discussed the use of rel=sponsored in one of the recent Google Webmaster Hangouts.

Source: YouTube

The question he was asked

“Our website has a growing commerce strategy and some members of our team believe that affiliate links are detrimental to our website ranking for other terms. Do we need to nofollow all affiliate links? If we don’t, will this hurt our organic traffic?”

John Mueller’s answer

“So this is something that, I think, comes up every now and then. From our point of view, affiliate links are links that are placed with a kind of commercial background, in that you are obviously trying to earn some money by having these affiliate links pointing to a distributor that you trust and have some kind of arrangement with.

From our point of view that is perfectly fine; that’s a way of monetizing your website and you’re welcome to do that.

We do kind of expect that these types of links are marked appropriately so that we understand these are affiliate links. One way to do that is to use just a nofollow.

A newer way to let us know about this kind of situation is to use the sponsored rel link attribute. That link attribute specifically tells us this is something to do with an advertising relationship; we treat that the same as a nofollow.

A lot of the affiliate links out there follow really clear patterns and we can recognize those, so we try to take care of those on our side when we can. But to be safe we recommend just using a nofollow or rel=sponsored link attribute. In general this isn’t something that would really harm your website if you don’t do it; it’s something that makes it a little clearer for us what these links are for. And if we see, for example, a website engaging in large-scale link selling, then that’s something where we might take manual action. But for the most part, if our algorithms just recognize these are links we don’t want to count, then we just won’t count them.”

How quickly are website owners acting on this?

This was only announced by Google in September, and website owners have until March to make the required changes, but data from SEMrush shows that website owners are already starting to adopt the new rel link attributes.

The data shows that out of one million domains, only 27,763 have at least one UGC link. Interestingly, each domain on that list has, on average, 20,904,603 follow backlinks, 6,373,970 nofollow, 22.8 UGC, and 55.5 sponsored.

Source: Semrush.com

These are still very early days, but we can already see change, and I would expect adoption to grow significantly into next year.

Conclusion

I believe Google is going to use the data from these link attributes to catch website owners who continue to sell links and mark them up incorrectly in order to pass SEO value to another website under any sort of agreement, paid or otherwise.

Paul Lovell is an SEO Consultant And Founder at Always Evolving SEO. He can be found on Twitter @_PaulLovell.

The post The evolution of Google’s rel “no follow” appeared first on Search Engine Watch.

Search Engine Watch


Tech giants still not doing enough to fight fakes, says European Commission

October 30, 2019

It’s a year since the European Commission got a bunch of adtech giants together to spill ink on a voluntary Code of Practice to do something — albeit, nothing very quantifiable — as a first step to stop the spread of disinformation online.

Its latest report card on this voluntary effort sums to: the platforms could do better.

The Commission said the same in January. And will doubtless say it again. Unless or until regulators grasp the nettle of online business models that profit by maximizing engagement. As the saying goes, lies fly while the truth comes stumbling after. So attempts to shrink disinformation without fixing the economic incentives to spread BS in the first place are mostly dealing in cosmetic tweaks and optics.

Signatories to the Commission’s EU Code of Practice on Disinformation are: Facebook, Google, Twitter, Mozilla, Microsoft and several trade associations representing online platforms, the advertising industry, and advertisers — including the Internet Advertising Bureau (IAB) and World Federation of Advertisers (WFA).

In a press release assessing today’s annual reports, compiled by signatories, the Commission expresses disappointment that no other Internet platforms or advertising companies have signed up since Microsoft joined as a late addition to the Code this year.

“We commend the commitment of the online platforms to become more transparent about their policies and to establish closer cooperation with researchers, fact-checkers and Member States. However, progress varies a lot between signatories and the reports provide little insight on the actual impact of the self-regulatory measures taken over the past year as well as mechanisms for independent scrutiny,” wrote commissioners Věra Jourová, Julian King, and Mariya Gabriel in a joint statement. [emphasis ours]

“While the 2019 European Parliament elections in May were clearly not free from disinformation, the actions and the monthly reporting ahead of the elections contributed to limiting the space for interference and improving the integrity of services, to disrupting economic incentives for disinformation, and to ensuring greater transparency of political and issue-based advertising. Still, large-scale automated propaganda and disinformation persist and there is more work to be done under all areas of the Code. We cannot accept this as a new normal,” they add.

The risk, of course, is that the Commission’s limp-wristed code risks rapidly cementing a milky jelly of self-regulation in the fuzzy zone of disinformation as the new normal, as we warned when the Code launched last year.

The Commission continues to leave the door open (a crack) to doing something platforms can’t (mostly) ignore — i.e. actual regulation — saying its assessment of the effectiveness of the Code remains ongoing.

But that’s just a dangled stick. At this transitionary point between outgoing and incoming Commissions, it seems content to stay in a ‘must do better’ holding pattern. (Or: “It’s what the Commission says when it has other priorities,” as one source inside the institution put it.)

A comprehensive assessment of how the Code is working is slated as coming in early 2020 — i.e. after the new Commission has taken up its mandate. So, yes, that’s the sound of the can being kicked a few more months on.

Summing up its main findings from signatories’ self-marked ‘progress’ reports, the outgoing Commission says they have reported improved transparency compared with a year ago in discussing their respective policies against disinformation. 

But it flags poor progress on implementing commitments to empower consumers and the research community.

“The provision of data and search tools is still episodic and arbitrary and does not respond to the needs of researchers for independent scrutiny,” it warns. 

This is ironically an issue that one of the signatories, Mozilla, has been an active critic of others over — including Facebook, whose political ad API it reviewed damningly this year, finding it not fit for purpose and “designed in ways that hinders the important work of researchers, who inform the public and policymakers about the nature and consequences of misinformation”. So, er, ouch.

The Commission is also critical of what it says are “significant” variations in the scope of actions undertaken by platforms to implement “commitments” under the Code, noting that differences in implementation of platform policy, cooperation with stakeholders, and sensitivity to electoral contexts persist across Member States, as well as differences in the EU-specific metrics provided.

But given the Code only ever asked for fairly vague action in some pretty broad areas, without prescribing exactly what platforms were committing themselves to doing, nor setting benchmarks for action to be measured against, inconsistency and variety is really what you’d expect. That and the can being kicked down the road. 

The Code did extract one quasi-firm commitment from signatories — on the issue of bot detection and identification — by getting platforms to promise to “establish clear marking systems and rules for bots to ensure their activities cannot be confused with human interactions”.

A year later it’s hard to see clear sign of progress on that goal. Although platforms might argue that what they claim is increased effort toward catching and killing malicious bot accounts before they have a chance to spread any fakes is where most of their sweat is going on that front.

Twitter’s annual report, for instance, talks about what it’s doing to fight “spam and malicious automation strategically and at scale” on its platform — saying its focus is “increasingly on proactively identifying problematic accounts and behaviour rather than waiting until we receive a report”; after which it says it aims to “challenge… accounts engaging in spammy or manipulative behavior before users are ​exposed to ​misleading, inauthentic, or distracting content”.

So, in other words, if Twitter does this perfectly — and catches every malicious bot before it has a chance to tweet — it might plausibly argue that bot labels are redundant. Though it’s clearly not in a position to claim it’s won the spam/malicious bot war yet. Ergo, its users remain at risk of consuming inauthentic tweets that aren’t clearly labeled as such (or even as ‘potentially suspect’ by Twitter). Presumably because these are the accounts that continue slipping under its bot-detection radar.

There’s also nothing in Twitter’s report about it labelling even (non-malicious) bot accounts as bots — for the purpose of preventing accidental confusion (after all satire misinterpreted as truth can also result in disinformation). And this despite the company suggesting a year ago that it was toying with adding contextual labels to bot accounts, at least where it could detect them.

In the event it’s resisted adding any more badges to accounts. While an internal reform of its verification policy for verified account badges was put on pause last year.

Facebook’s report also only makes a passing mention of bots, under a section sub-headed “spam” — where it writes circularly: “Content actioned for spam has increased considerably, since we found and took action on more content that goes against our standards.”

It includes some data-points to back up this claim of more spam squashed — citing a May 2019 Community Standards Enforcement report — where it states that in Q4 2018 and Q1 2019 it acted on 1.8 billion pieces of spam in each of the quarters vs 737 million in Q4 2017; 836 million in Q1 2018; 957 million in Q2 2018; and 1.2 billion in Q3 2018. 

Though it’s lagging on publishing more up-to-date spam data now, noting in the report submitted to the EC that: “Updated spam metrics are expected to be available in November 2019 for Q2 and Q3 2019” — i.e. conveniently late for inclusion in this report.

Facebook’s report notes ongoing efforts to put contextual labels on certain types of suspect/partisan content, such as labelling photos and videos which have been independently fact-checked as misleading; labelling state-controlled media; and labelling political ads.

Labelling bots is not discussed in the report — presumably because Facebook prefers to focus attention on self-defined spam-removal metrics vs muddying the water with discussion of how much suspect activity it continues to host on its platform, either through incompetence, lack of resources or because it’s politically expedient for its business to do so.

Labelling all these bots would mean Facebook signposting inconsistencies in how it applies its own policies, in a way that might foreground its own political bias. And there’s no self-regulatory mechanism under the sun that will make Facebook fess up to such double standards.

For now, the Code’s requirement for signatories to publish an annual report on what they’re doing to tackle disinformation looks to be the biggest win so far. Albeit, it’s very loosely bound self-reporting. While some of these ‘reports’ don’t even run to a full page of A4-text — so set your expectations accordingly.

The Commission has published all the reports here. It has also produced its own summary and assessment of them (here).

“Overall, the reporting would benefit from more detailed and qualitative insights in some areas and from further big-picture context, such as trends,” it writes. “In addition, the metrics provided so far are mainly output indicators rather than impact indicators.”

Of the Code generally — as a “self-regulatory standard” — the Commission argues it has “provided an opportunity for greater transparency into the platforms’ policies on disinformation as well as a framework for structured dialogue to monitor, improve and effectively implement those policies”, adding: “This represents progress over the situation prevailing before the Code’s entry into force, while further serious steps by individual signatories and the community as a whole are still necessary.”


Social – TechCrunch


Yext Answers helps businesses provide better site search

October 29, 2019

Yext helps businesses manage their presence on search and across the web; starting today, with the launch of Yext Answers, it’s also helping them provide a better experience on their own websites.

“It lets any company with a website answer a question about their own brand in a Google-like experience on their own site,” CEO Howard Lerman told me.

While Lerman is officially announcing Yext Answers onstage at the company’s Onward conference this afternoon, the issue is clearly one he’s been thinking about for a while — in an interview earlier this year, he described user-generated content as “tyranny,” and claimed the company’s “founding principle is that the ultimate authority on how many calories are in a Big Mac is McDonald’s.”

It’s a theme that Lerman returned to when he demonstrated the new product for me yesterday, running a number of Google searches — such as “student checking account” — where a brand might want to be relevant, but where the results mostly come from SEO-optimized advice and how-to articles from third-party sites.

“The world of search became pretty cluttered with all these self-declared experts,” he said.


The goal with Yext Answers is to turn a brand’s website into the source that consumers turn to for information on these topics. Lerman said the big obstacle is the simple fact that most site search is pretty bad: “The algorithms that are there today are the algorithms of 1995. It’s keyword-based document search.”

So if you don’t enter exactly the right keywords in exactly the right order, you don’t get useful results. Yext, on the other hand, has supposedly spent two years building its own search engine, with natural language processing technology.

As Lerman showed me, that means it can handle more complex, conversational queries like “broccoli cheese soup recipes in 10 minutes or less.” He also pointed out how Yext has tried to follow Google’s lead in presenting the results in a variety of formats, whether that’s just a straightforward answer to a question, or maps if you’re searching for store locations.

In addition, Yext Answers customers will get analytics about what people are searching for on their site. If people are searching for a question that the site isn’t answering, businesses can then take advantage of their company’s knowledge base to publish something new — and that, in turn, could also help them show up in search results elsewhere.


Yext Answers has been beta testing with companies like Three Mobile, BBVA USA, IHA and Healthcare Associates of Texas. You also can try it out for yourself on the Yext site.

“Yext Answers represents a level of sophistication that elevates our current search into a predictive, insightful tool that provides opportunities to better understand what our patient population is interested in finding on our site,” said Lori Gillen, marketing director at Healthcare Associates of Texas, in a statement. “It is intelligent enough to understand complex relationships between HCAT-specific facts, like doctors to procedures or specialties to locations, and give insights into what our patients want to know.”

Yext Answers is now available in English-speaking countries.


Enterprise – TechCrunch


Spider eyes inspire a new kind of depth-sensing camera

October 29, 2019

As robots and gadgets continue to pervade our everyday lives, they increasingly need to see in 3D — but as evidenced by the notch in your iPhone, depth-sensing cameras are still pretty bulky. A new approach inspired by how some spiders sense the distance to their prey could change that.

Jumping spiders don’t have room in their tiny, hairy heads for structured light projectors and all that kind of thing. Yet they have to see where they’re going and what they’re grabbing in order to be effective predators. How do they do it? As is usually the case with arthropods, in a super weird but interesting way.

Instead of having multiple eyes capturing a slightly different image and taking stereo cues from that, as we do, each of the spider’s eyes is in itself a depth-sensing system. Each eye is multi-layered, with transparent retinas seeing the image with different amounts of blur depending on distance. The differing blurs from different eyes and layers are compared in the spider’s small nervous system and produce an accurate distance measurement — using incredibly little in the way of “hardware.”

Researchers at Harvard have created a high-tech lens system that uses a similar approach, producing the ability to sense depth without traditional optical elements.


The “metalens” created by electrical engineering professor Federico Capasso and his team detects an incoming image as two similar ones with different amounts of blur, like the spider’s eye does. These images are compared using an algorithm also like the spider’s — at least in that it is very quick and efficient — and the result is a lovely little real-time, whole-image depth calculation.
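
The paper’s actual algorithm isn’t reproduced here, but the core idea, comparing how sharp the same scene appears in two differently focused images, can be sketched with NumPy and SciPy. In this toy version, local sharpness is the Laplacian energy averaged over a window, and the sign of the difference indicates which focal plane a region is nearer; the window size and sharpness measure are illustrative choices, not the researchers’ method:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_sharpness(img: np.ndarray, win: int = 9) -> np.ndarray:
    """Per-pixel sharpness: 3x3 Laplacian energy averaged over a win x win window."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return uniform_filter(lap ** 2, size=win)

def relative_depth_cue(near_focus: np.ndarray, far_focus: np.ndarray) -> np.ndarray:
    """Crude depth cue from two differently focused views of the same scene.

    Values near +1 mean a region is sharper in the near-focused image
    (the object is close); values near -1 mean it is sharper in the
    far-focused image (the object is far).
    """
    s_near = local_sharpness(near_focus.astype(float))
    s_far = local_sharpness(far_focus.astype(float))
    return (s_near - s_far) / (s_near + s_far + 1e-12)

# Toy usage with synthetic arrays standing in for the two sensor images:
rng = np.random.default_rng(0)
near = rng.random((64, 64))
far = rng.random((64, 64))
print(relative_depth_cue(near, far).shape)  # (64, 64) depth-cue map
```

The appeal of this family of methods is exactly what the article notes: the comparison is cheap, local, and needs no structured-light projector.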


The process is not only efficient, meaning it can be done with very little computing hardware and power, but it can be extremely compact: the one used for this experiment was only 3 millimeters across.

This means it could be included not just on self-driving cars and industrial robots but on small gadgets, smart home items and, of course, phones — it probably won’t replace Face ID, but it’s a start.

The paper describing the metalens system will be published today in the Proceedings of the National Academy of Sciences.

Gadgets – TechCrunch

