Best Business Search

Tag: leaks

The Real Impact of the Grand Theft Auto and Diablo Leaks

September 20, 2022

When hackers release game information early, it’s the developers that suffer—not the players.
Feed: All Latest


1.8 TB of Police Helicopter Surveillance Footage Leaks Online

November 6, 2021

DDoSecrets published the trove Friday afternoon. Privacy advocates say it shows how pervasive law enforcement’s eye has become, and how lax its data protection can be.
Feed: All Latest


Facebook data misuse and voter manipulation back in the frame with latest Cambridge Analytica leaks

January 7, 2020

More details are emerging about the scale and scope of disgraced data company Cambridge Analytica’s activities in elections around the world — via a cache of internal documents that’s being released by former employee and self-styled whistleblower, Brittany Kaiser.

The now-defunct data modelling company, which infamously used stolen Facebook data to target voters for President Donald Trump’s campaign in the 2016 U.S. election, was at the center of the data misuse scandal that, in 2018, wiped billions off Facebook’s share price and contributed to a $5BN FTC fine for the tech giant last summer.

However, plenty of questions remain, including where, for whom and exactly how Cambridge Analytica and its parent entity SCL Elections operated, as well as how much Facebook’s leadership knew about the dealings of the firm that was using its platform to extract data and target political ads — helped by some of Facebook’s own staff.

Certain Facebook employees were referring to Cambridge Analytica as a “sketchy” company as far back as September 2015 — yet the tech giant only pulled the plug on platform access after the scandal went global in 2018.

Facebook CEO Mark Zuckerberg has also continued to maintain that he only personally learned about CA from a December 2015 Guardian article, which broke the story that Ted Cruz’s presidential campaign was using psychological data based on research covering tens of millions of Facebook users, harvested largely without permission. (It wasn’t until March 2018 that further investigative journalism blew the lid off the story — turning it into a global scandal.)

Former Cambridge Analytica business development director Kaiser, who had a central role in last year’s Netflix documentary about the data misuse scandal (The Great Hack), began her latest data dump late last week — publishing links to scores of previously unreleased internal documents via a Twitter account called @HindsightFiles. (At the time of writing Twitter has placed a temporary limit on viewing the account — citing “unusual activity”, presumably as a result of the volume of downloads it’s attracting.)

Since becoming part of the public CA story Kaiser has been campaigning for Facebook to grant users property rights over their data. She claims she’s releasing new documents from her former employer now because she’s concerned this year’s US election remains at risk of the same type of big-data-enabled voter manipulation that tainted the 2016 result.

“I’m very fearful about what is going to happen in the US election later this year, and I think one of the few ways of protecting ourselves is to get as much information out there as possible,” she told The Guardian.

“Democracies around the world are being auctioned to the highest bidder,” is the tagline on the Twitter account Kaiser is using to distribute the previously unpublished documents — more than 100,000 of which are set to be released over the coming months, per the newspaper’s report.

The releases are being grouped by country — with documents to date covering Brazil, Kenya and Malaysia. There is also a themed release dealing with issues pertaining to Iran, and another covering CA/SCL’s work for Republican John Bolton’s Political Action Committee in the U.S.

The releases look set to underscore the global scale of CA/SCL’s social media-fuelled operations, with Kaiser writing that the previously unreleased emails, project plans, case studies and negotiations span at least 65 countries.

A spreadsheet of associate officers included in the current cache lists SCL associates in a large number of countries and regions including Australia, Argentina, the Balkans, India, Jordan, Lithuania, the Philippines, Switzerland and Turkey, among others. A second tab listing “potential” associates covers political and commercial contacts in various other places including Ukraine and even China.

A UK parliamentary committee which investigated online political campaigning and voter manipulation in 2018 — taking evidence from Kaiser and CA whistleblower Chris Wylie, among others — urged the government to audit the PR and strategic communications industry, warning in its final report how “easy it is for discredited companies to reinvent themselves and potentially use the same data and the same tactics to undermine governments, including in the UK”.

“Data analytics firms have played a key role in elections around the world. Strategic communications companies frequently run campaigns internationally, which are financed by less than transparent means and employ legally dubious methods,” the DCMS committee also concluded.

The committee’s final report highlighted election and referendum campaigns that SCL Elections (and its myriad “associated companies”) had been involved in across around thirty countries. But per Kaiser’s telling, its activities — and/or ambitions — appear to have been considerably broader, even global in scope.

Documents released to date include a case study of work that CA was contracted to carry out in the U.S. for Bolton’s Super PAC — where it undertook what is described as “a personality-targeted digital advertising campaign with three interlocking goals: to persuade voters to elect Republican Senate candidates in Arkansas, North Carolina and New Hampshire; to elevate national security as an issue of importance and to increase public awareness of Ambassador Bolton’s Super PAC”.

Here CA writes that it segmented “persuadable and low-turnout voter populations to identify several key groups that could be influenced by Bolton Super PAC messaging”, targeting them with online and Direct TV ads — designed to “appeal directly to specific groups’ personality traits, priority issues and demographics”. 

Psychographic profiling — derived from CA’s modelling of Facebook user data — was used to segment U.S. voters into targetable groups, including for serving microtargeted online ads. The company badged voters with personality-specific labels such as “highly neurotic” — targeting individuals with customized content designed to prey on their fears and/or hopes based on its analysis of voters’ personality traits.

The process of segmenting voters by personality and sentiment was made commercially possible by access to identity-linked personal data — which puts Facebook’s population-scale collation of identities and individual-level personal data squarely in the frame.
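The article does not describe CA’s actual model, but the segmentation mechanics it gestures at can be pictured as a simple labelling pass over per-voter trait scores. The scores, thresholds and labels below are entirely hypothetical, invented for illustration:

```python
# Hypothetical sketch only: trait scores, thresholds and bucket names are
# invented; the article does not disclose CA's real model or data.
voters = [
    {"id": 1, "neuroticism": 0.9, "turnout_likelihood": 0.2},
    {"id": 2, "neuroticism": 0.3, "turnout_likelihood": 0.8},
    {"id": 3, "neuroticism": 0.7, "turnout_likelihood": 0.4},
]

def segment(voter):
    """Assign a coarse targeting label from per-voter trait scores."""
    if voter["neuroticism"] >= 0.7:
        label = "highly neurotic"    # bucket for fear-oriented messaging
    else:
        label = "low neurotic"
    if voter["turnout_likelihood"] < 0.5:
        label += " / low turnout"    # "persuadable" voters worth targeting
    return label

labels = {v["id"]: segment(v) for v in voters}
```

The point the article makes is that none of this is hard once the identity-linked trait data exists; the difficult (and contested) part was acquiring that data in the first place.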

It was a cache of tens of millions of Facebook profiles, along with responses to a personality quiz app linked to Facebook accounts, which was sold to Cambridge Analytica in 2014 by a company called GSR and used to underpin its psychographic profiling of U.S. voters.

In evidence to the DCMS committee last year GSR’s co-founder, Aleksandr Kogan, argued that Facebook did not have a “valid” developer policy at the time, since he said the company did nothing to enforce the stated T&Cs — meaning users’ data was wide open to misappropriation and exploitation.

The UK’s data protection watchdog also took a dim view. In 2018 it issued Facebook with the maximum fine possible, under relevant national law, for the CA data breach — and warned in a report that democracy is under threat. The country’s information commissioner also called for an “ethical pause” of the use of online microtargeting ad tools for political campaigning.

No such pause has taken place.

Meanwhile for its part, since the Cambridge Analytica scandal snowballed into global condemnation of its business, Facebook has made loud claims to be ‘locking down’ its platform — including saying it would conduct an app audit and “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; and “ban any developer from our platform that does not agree to a thorough audit”.

However, close to two years later, there’s still no final report from the company on the upshot of this self-audit.

And while Facebook was slapped with a headline-grabbing FTC fine on home soil, there was in fact no proper investigation; no requirement for it to change its privacy-hostile practices; and blanket immunity for top execs — even for any unknown data violations in the 2012 to 2018 period. So, ummm…

In another highly curious detail, GSR’s other co-founder, a data scientist called Joseph Chancellor, was in fact hired by Facebook in late 2015. The tech giant has never satisfactorily explained how it came to recruit one of the two individuals at the center of a voter manipulation data misuse scandal which continues to wreak hefty reputational damage on Zuckerberg and his platform. But being able to ensure Chancellor was kept away from the press during a period of intense scrutiny looks pretty convenient.

Last fall, the GSR co-founder was reported to have left Facebook — as quietly, and with as little explanation given, as when he arrived on the tech giant’s payroll.

So Kaiser seems quite right to be concerned that the data industrial complex will do anything to keep its secrets — given it’s designed and engineered to sell access to yours. Even as she has her own reasons to want to keep the story in the media spotlight.

Platforms whose profiteering purpose is to track and target people at global scale — which function by leveraging an asymmetrical ‘attention economy’ — have zero incentive to change or have change imposed upon them. Not when the propaganda-as-a-service business remains in such high demand, whether for selling actual things like bars of soap, or for hawking ideas with a far darker purpose.


Social – TechCrunch


GitGuardian raises $12M to help developers write more secure code and ‘fix’ GitHub leaks

December 5, 2019

Data breaches that can cause millions of dollars in damages have been the bane of many a company. Preventing them requires a great deal of real-time monitoring — and the problem is that this world has become incredibly complex. A SANS Institute survey found half of company data breaches were the result of account or credential hacking.

GitGuardian has attempted to address this with a highly developer-centric cybersecurity solution.

It’s now attracted the attention of major investors, to the tune of $12 million in Series A funding, led by Balderton Capital. Scott Chacon, co-founder of GitHub, and Solomon Hykes, founder of Docker, also participated in the round.

The startup plans to use the investment from Balderton Capital to expand its customer base, predominantly in the U.S. Around 75% of its clients are currently based in the U.S., with the remainder being based in Europe, and the funding will continue to drive this expansion.

Built to uncover sensitive company information hiding in online repositories, GitGuardian says its real-time monitoring platform can address the data leaks issues. Modern enterprise software developers have to integrate multiple internal and third-party services. That means they need incredibly sensitive “secrets,” such as login details, API keys and private cryptographic keys used to protect confidential systems and data.

GitGuardian’s systems detect thousands of credential leaks per day. The team originally built its launch platform with public GitHub in mind; however, GitGuardian is built as a private solution to monitor and notify on secrets that are inappropriately disseminated in internal systems as well, such as private code repositories or messaging systems.
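Conceptually, the kind of detection described above boils down to scanning committed source for strings that look like credentials. The patterns below are a deliberately tiny, hypothetical sample for illustration — a real scanner such as GitGuardian combines hundreds of provider-specific detectors with entropy analysis and context checks:

```python
import re

# Illustrative patterns only; not GitGuardian's actual detectors.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\bapi[_-]?key\s*[:=]\s*['\"]([A-Za-z0-9]{20,})['\"]"
    ),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_source(text: str) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) for each suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

# AWS's documented example key, plus a fake hardcoded API key.
sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\napi_key = "abcd1234abcd1234abcd1234"'
print(scan_source(sample))
```

This also illustrates Thomas’s point below about why the problem persists: developers know not to hardcode secrets, but a single slip in any commit is enough, so detection has to be automated and continuous.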

Solomon Hykes, founder of Docker and an investor in GitGuardian, said: “Securing your systems starts with securing your software development process. GitGuardian understands this, and they have built a pragmatic solution to an acute security problem. Their credentials monitoring system is a must-have for any serious organization.”

Do they have any competitors?

Co-founder Jérémy Thomas told me: “We currently don’t have any direct competitors. This generally means that there’s no market, or the market is too small to be interesting. In our case, our fundraise proves we’ve put our hands on something huge. So the reason we don’t have competitors is because the problem we’re solving is counterintuitive at first sight. Ask any developer, they will say they would never hardcode any secret in public source code. However, humans make mistakes and when that happens, they can be extremely serious: it can take a single leaked credential to jeopardize an entire organization. To conclude, I’d say our real competitors so far are black hat hackers. Black hat activity is real on GitHub. For two years, we’ve been monitoring organized groups of hackers that exchange sensitive information they find on the platform. We are competing with them on speed of detection and scope of vulnerabilities covered.”


Enterprise – TechCrunch


Twitter ‘fesses up to more adtech leaks

August 7, 2019

Twitter has disclosed more bugs related to how it uses personal data for ad targeting, which mean it may have shared users’ data with advertising partners even when a user had expressly told it not to.

Back in May the social network disclosed a bug that in certain conditions resulted in an account’s location data being shared with a Twitter ad partner, during real-time bidding (RTB) auctions.

In a blog post on its Help Center about the latest “issues” Twitter says it “recently” found, it admits to finding two problems with users’ ad settings choices that mean they “may not have worked as intended”.

It claims both problems were fixed on August 5. Though it does not specify when it realized it was processing user data without their consent.

The first bug relates to tracking ad conversions. This meant that if a Twitter user clicked or viewed an ad for a mobile application on the platform and subsequently interacted with the mobile app Twitter says it “may have shared certain data (e.g., country code; if you engaged with the ad and when; information about the ad, etc)” with its ad measurement and advertising partners — regardless of whether the user had agreed their personal data could be shared in this way.

It suggests this leak of data has been happening since May 2018 — which is also the month when Europe’s updated privacy framework, GDPR, came into force. The regulation mandates disclosure of data breaches (which explains why you’re hearing about all these issues from Twitter) — and means that quite a lot is riding on how “recently” Twitter found these latest bugs. Because GDPR also includes a supersized regime of fines for confirmed data protection violations.

Though it remains to be seen whether Twitter’s now repeatedly leaky adtech will attract regulatory attention…

Twitter specifies that it does not share users’ names, Twitter handles, email or phone number with ad partners. However it does share a user’s mobile device identifier, which GDPR treats as personal data as it acts as a unique identifier. Using this identifier, Twitter and Twitter’s ad partners can work together to link a device identifier to other pieces of identity-linked personal data they collectively hold on the same user to track their use of the wider Internet, thereby allowing user profiling and creepy ad targeting to take place in the background.

The second issue Twitter discloses in the blog post also relates to tracking users’ wider web browsing to serve them targeted ads.

Here Twitter admits that, since September 2018, it may have served targeted ads that used inferences made about the user’s interests based on tracking their wider use of the Internet — even when the user had not given permission to be tracked.

This sounds like another breach of GDPR, given that in cases where the user did not consent to being tracked for ad targeting Twitter would lack a legal basis for processing their personal data. But it’s saying it processed it anyway — albeit, it claims accidentally.

This type of creepy ad targeting — based on so-called ‘inferences’ — is made possible because Twitter associates the devices you use (including mobile and browsers) when you’re logged in to its service with your Twitter account, and then receives information linked to these same device identifiers (IP addresses and potentially browser fingerprinting) back from its ad partners, likely gathered via tracking cookies (including Twitter’s own social plug-ins) which are larded all over the mainstream Internet for the purpose of tracking what you look at online.

These third party ad cookies link individuals’ browsing data (which gets turned into inferred interests) with unique device/browser identifiers (linked to individuals) to enable the adtech industry (platforms, data brokers, ad exchanges and so on) to track web users across the web and serve them “relevant” (aka creepy) ads.
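The linkage described above is, mechanically, just a join on a shared device identifier. As a purely hypothetical sketch (the record shapes and field names are invented, not Twitter’s actual data model), this shows how two separately collected datasets — a platform’s logged-in accounts and a partner’s browsing events — combine into one identity-linked profile:

```python
# Hypothetical record shapes, for illustration only.
platform_accounts = [
    {"account": "@alice", "device_id": "idfa-1111"},
    {"account": "@bob", "device_id": "idfa-2222"},
]

partner_events = [  # browsing events a tracking partner might report back
    {"device_id": "idfa-1111", "site": "running-shoes.example"},
    {"device_id": "idfa-1111", "site": "marathon-training.example"},
    {"device_id": "idfa-2222", "site": "used-cars.example"},
]

def build_profiles(accounts, events):
    """Join the two datasets on device_id to attach browsing-derived interests."""
    by_device = {}
    for e in events:
        by_device.setdefault(e["device_id"], []).append(e["site"])
    return {a["account"]: by_device.get(a["device_id"], []) for a in accounts}

profiles = build_profiles(platform_accounts, partner_events)
```

Neither dataset alone names a person’s off-platform browsing; the device identifier is the pivot that makes the combined profile — and the “inferred interests” it yields — possible, which is why GDPR treats such identifiers as personal data.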

“As part of a process we use to try and serve more relevant advertising on Twitter and other services since September 2018, we may have shown you ads based on inferences we made about the devices you use, even if you did not give us permission to do so,” is how Twitter explains this second ‘issue’.

“The data involved stayed within Twitter and did not contain things like passwords, email accounts, etc.,” it adds. Although the key point here is one of a lack of consent, not where the data ended up.

(Also, the users’ wider Internet browsing activity linked to their devices via cookie tracking did not originate with Twitter — even if it’s claiming the surveillance files it received from its “trusted” partners stayed on its servers. Bits and pieces of that tracked data would, in any case, exist all over the place.)

In an explainer on its website on “personalization based on your inferred identity” Twitter seeks to reassure users that it will not track them without their consent, writing:

We are committed to providing you meaningful privacy choices. You can control whether we operate and personalize your experience based on browsers or devices other than the ones you use to log in to Twitter (or if you’re logged out, browsers or devices other than the one you’re currently using), or email addresses and phone numbers similar to those linked to your Twitter account. You can do this by visiting your Personalization and data settings and adjusting the Personalize based on your inferred identity setting.

The problem in this case is that users’ privacy choices were simply overridden. Twitter says it did not do so intentionally. But either way it’s not consent. Ergo, a breach.

“We know you will want to know if you were personally affected, and how many people in total were involved. We are still conducting our investigation to determine who may have been impacted and if we discover more information that is useful we will share it,” Twitter goes on. “What is there for you to do? Aside from checking your settings, we don’t believe there is anything for you to do.

“You trust us to follow your choices and we failed here. We’re sorry this happened, and are taking steps to make sure we don’t make a mistake like this again. If you have any questions, you may contact Twitter’s Office of Data Protection through this form.”

While the company may “believe” there is nothing Twitter users can do — aside from accept its apology for screwing up — European Twitter users who believe it processed their data without their consent do have a course of action they can take: They can complain to their local data protection watchdog.

Zooming out, there are also major legal question marks hanging over behaviourally targeted ads in Europe.

The UK’s privacy regulator warned in June that systematic profiling of web users via invasive tracking technologies such as cookies is in breach of pan-EU privacy laws — following multiple complaints filed in the region that argue RTB is in breach of the GDPR.

Meanwhile, back in May, Google’s lead regulator in Europe, the Irish Data Protection Commission, confirmed it had opened a formal investigation into use of personal data in the context of its online Ad Exchange.

So the wider point here is that the whole leaky business of creepy ads looks to be operating on borrowed time.


Social – TechCrunch




Powered by WP Robot