Privacy Enhancements for Android

Privacy Enhancements for Android (PE for Android) is a platform for exploring concepts in regulating access to private information on mobile devices. The goal is to create an extensible privacy system that abstracts away the details of various privacy-preserving technologies. PE for Android allows app developers to safely leverage state-of-the-art privacy techniques without knowledge of esoteric underlying technologies. Further, PE for Android helps users to take ownership of their private information by presenting them with more intuitive controls and permission enforcement. The platform was developed as a fork of the Android Open Source Project (AOSP) release for Android 9 “Pie” and can be installed as a Generic System Image (GSI) on a Project Treble-compliant device.

Source: Privacy Enhancements for Android

Under DARPA’s Brandeis program, a team of researchers led by Two Six Labs and Raytheon BBN Technologies has developed a platform called Privacy Enhancements for Android (PE for Android) to explore more expressive concepts in regulating access to private information on mobile devices. PE for Android seeks to create an extensible privacy system that abstracts away the details of various privacy-preserving technologies, allowing application developers to utilize state-of-the-art privacy techniques, such as secure multi-party computation and differential privacy, without knowledge of the esoteric technologies underneath. Importantly, PE for Android allows mobile device users to take ownership of their private information by presenting them with more intuitive controls and permission enforcement options.

Source: Researchers on DARPA’s Brandeis Program Enhance Privacy Protections for Android Applications

No cookie consent walls — and no, scrolling isn’t consent, says EU data protection body

You can’t make access to your website’s content dependent on a visitor agreeing that you can process their data — aka a ‘consent cookie wall’. Not if you need to be compliant with European data protection law.

That’s the unambiguous message from the European Data Protection Board (EDPB), which has published updated guidelines on the rules around online consent to process people’s data.

Under pan-EU law, consent is one of six lawful bases that data controllers can use when processing people’s personal data.

But in order for consent to be legally valid under Europe’s General Data Protection Regulation (GDPR) there are specific standards to meet: It must be clear and informed, specific and freely given.

Hence cookie walls that demand ‘consent’ as the price for getting inside the club are not only an oxymoron but run into a legal brick wall.

No consent behind a cookie wall

The regional cookie wall has been crumbling for some time, as we reported last year — when the Dutch DPA clarified its guidance to ban cookie walls.

The updated guidelines from the EDPB look intended to hammer the point home. The steering body’s role is to provide guidance to national data protection agencies to encourage a more consistent application of data protection rules.

The EDPB’s intervention should — should! — remove any inconsistencies of interpretation on the updated points by national agencies of the bloc’s 27 Member States. (Though compliance with EU data protection law tends to be a process; aka it’s a marathon not a sprint, though on the cookie wall issues the ‘runners’ have been going around the tracks for a considerable time now.)

As we noted in our report on the Dutch clarification last year, the Interactive Advertising Bureau Europe (IAB Europe) was operating a full cookie wall — instructing visitors to ‘agree’ to its data processing terms if they wished to view the content.

The problem that we pointed out is that that wasn’t a free choice. Yet EU law requires a free choice for consent to be legally valid. So it’s interesting to note the IAB Europe has, at some point since, updated its cookie consent implementation — removing the cookie wall and offering a fairly clear (if nudged) choice to visitors to either accept or deny cookies for “aggregated statistics”…

As we said at the time, the writing was on the wall for consent cookie walls.

The EDPB document includes the below example to illustrate the salient point that consent cookie walls do not “constitute valid consent, as the provision of the service relies on the data subject clicking the ‘Accept cookies’ button. It is not presented with a genuine choice.”

It’s hard to get clearer than that, really.

Scrolling never means ‘take my data’

A second area to get attention in the updated guidance, as a result of the EDPB deciding there was a need for additional clarification, is the issue of scrolling and consent.

Simply put: Scrolling on a website or digital service cannot — in any way — be interpreted as consent.

Or, as the EDPB puts it, “actions such as scrolling or swiping through a webpage or similar user activity will not under any circumstances satisfy the requirement of a clear and affirmative action” [emphasis ours].

Source: No cookie consent walls — and no, scrolling isn’t consent, says EU data protection body | TechCrunch

IAB Europe Guide to the Post Third-Party Cookie Era

This Guide has been developed by experts from IAB Europe’s Programmatic Trading Committee (PTC) to prepare brands, agencies, publishers and tech intermediaries for the much-anticipated post third-party cookie advertising ecosystem.

It provides background on the use of cookies in digital advertising today and an overview of the alternative solutions being developed. As solutions evolve, the PTC will be updating this Guide on a regular basis to provide the latest information and guidance on market alternatives to third-party cookies.

The Guide, available below as an e-book or PDF, helps to answer the following questions:

  • What factors have contributed to the depletion of the third-party cookie?
  • How will the depletion of third-party cookies impact stakeholders and the wider industry including proprietary platforms?
  • How will the absence of third-party cookies affect the execution of digital advertising campaigns?
  • What solutions currently exist to replace the usage of third-party cookies?
  • What industry solutions are currently being developed and by whom?
  • How can I get involved in contributing to the different solutions?

Source: IAB Europe Guide to the Post Third-Party Cookie Era – IAB Europe

Yup, soon advertisers won’t be able to track you across the internet using third-party cookies anymore

Researchers create a new system to protect users’ online data by checking if data entered is consistent with the privacy policy

Researchers have created a new system that helps Internet users ensure their online data is secure.

The software-based system, called Mitigator, includes a plugin users can install in their browser that will give them a secure signal when they visit a website verified to process its data in compliance with the site’s privacy policy.

“Privacy policies are really hard to read and understand,” said Miti Mazmudar, a PhD candidate in Waterloo’s David R. Cheriton School of Computer Science. “What we try to do is have a compliance system that takes a simplified model of the privacy policy and checks the code on the website’s end to see if it does what the privacy policy claims to do.

“If a website requires you to enter your email address, Mitigator will notify you if the privacy policy stated that this wouldn’t be needed or if the privacy policy did not mention the requirement at all.”

Mitigator can work on any computer, but the companies that own the website servers must have machines with a trusted execution environment (TEE). A TEE is a secure area of modern server-class processors that guarantees the confidentiality and integrity of the code and data loaded into it.
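To make the idea concrete, here is a toy sketch of the compliance check — not Mitigator’s actual mechanism, which verifies the server-side code inside the TEE. The policy is reduced to a set of declared data fields, and anything a page requests beyond that set gets flagged:

    # Toy model of the compliance idea: a policy declares which fields the
    # site may collect; any undeclared field a page requests gets flagged.
    declared_fields = {"username", "password"}   # simplified policy model

    def undeclared(requested: set, declared: set) -> set:
        """Fields the site asks for that its privacy policy never mentioned."""
        return requested - declared

    # A signup form that also demands an email address the policy omits.
    violations = undeclared({"username", "password", "email"}, declared_fields)
    if violations:
        print(f"Warning: undeclared data requested: {sorted(violations)}")
    # -> Warning: undeclared data requested: ['email']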

“The big difference between Mitigator and prior systems that had similar goals is that Mitigator’s primary focus is on the signal it gives to the user,” said Ian Goldberg, a professor in Waterloo’s Faculty of Mathematics. “The important thing is not just that the company knows their software is running correctly; we want the user to get this assurance that the company’s software is running correctly and is processing their data properly and not just leaving it lying around on disk to be stolen.

“Users of Mitigator will know whether their data is being properly protected, managed, and processed while the companies will benefit in that their customers are happier and more confident that nothing untoward is being done with their data.”

The study, Mitigator: Privacy policy compliance using trusted hardware, authored by Mazmudar and Goldberg, has been accepted for publication in the Proceedings of Privacy Enhancing Technologies.

Source: Researchers create a new system to protect users’ online data | Waterloo Stories | University of Waterloo

UK COVID-19 contact tracing app data may be kept for ‘research’ after crisis ends, MPs told

Britons will not be able to ask NHS admins to delete their COVID-19 tracking data from government servers, digital arm NHSX’s chief exec Matthew Gould admitted to MPs this afternoon.

Gould also told Parliament’s Human Rights Committee that data harvested from Britons through NHSX’s COVID-19 contact tracing app would be “pseudonymised” – and appeared to leave the door open for that data to be sold on for “research”.

The government’s contact-tracing app will be rolled out in Britain this week. A demo seen by The Register showed its basic consumer-facing functions. Key to those is a big green button that the user presses to send 28 days’ worth of contact data to the NHS.

[Screenshot of the NHSX COVID-19 contact tracing app]

Written by tech arm NHSX, Britain’s contact-tracing app breaks with international convention by opting for a centralised model of data collection, rather than a decentralised approach that keeps contact data stored locally on users’ phones.

In response to questions from Scottish Nationalist MP Joanna Cherry this afternoon, Gould told MPs: “The data can be deleted for as long as it’s on your own device. Once uploaded, all the data will be deleted or fully anonymised in line with the law, so it can be used for research purposes.”

Source: UK COVID-19 contact tracing app data may be kept for ‘research’ after crisis ends, MPs told • The Register

New Firefox service will generate unique email aliases to enter in online forms

Browser maker Mozilla is working on a new service called Private Relay that generates unique aliases to hide a user’s email address from advertisers and spam operators when filling in online forms.

The service entered testing last month and is currently in a closed beta, with a public beta currently scheduled for later this year, ZDNet has learned.

Private Relay will be available as a Firefox add-on that lets users generate a unique email address — an email alias — with one click.

The user can then enter this email address in web forms to send contact requests, subscribe to newsletters, and register new accounts.

“We will forward emails from the alias to your real inbox,” Mozilla says on the Firefox Private Relay website.

“If any alias starts to receive emails you don’t want, you can disable it or delete it completely,” the browser maker said.

The concept of email aliases has existed for decades, but managing them has always been a chore, and many email providers didn’t give users access to such a feature.

Through Firefox Private Relay, Mozilla hopes to provide an easy-to-use solution that lets users create and destroy email aliases with a few button clicks.
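Mechanically, an alias relay is just a per-site random address plus a forwarding table with an on/off switch per entry. A rough conceptual sketch — hypothetical addresses and domain, no relation to Mozilla’s actual code:

    # Sketch of the alias-relay mechanics (illustrative, not Mozilla's code):
    # each site gets its own random alias that forwards to one real inbox,
    # and an alias can be disabled the moment it starts attracting spam.
    import secrets
    from typing import Optional

    REAL_INBOX = "me@example.com"            # hypothetical real address
    aliases = {}                             # alias -> {"site", "enabled"}

    def new_alias(site: str) -> str:
        alias = f"{secrets.token_hex(4)}@relay.example"   # hypothetical domain
        aliases[alias] = {"site": site, "enabled": True}
        return alias

    def route(alias: str) -> Optional[str]:
        entry = aliases.get(alias)
        if entry and entry["enabled"]:
            return REAL_INBOX                # forward to the real inbox
        return None                          # disabled or unknown: drop it

    shop = new_alias("shop.example")
    print(route(shop))                       # me@example.com
    aliases[shop]["enabled"] = False         # alias starts receiving spam
    print(route(shop))                       # None -- mail silently dropped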

Source: New Firefox service will generate unique email aliases to enter in online forms | ZDNet

Brave accuses European governments of GDPR resourcing failure

Brave, a maker of a pro-privacy browser, has lodged complaints with the European Commission against 27 EU Member States for under resourcing their national data protection watchdogs.

It’s asking the European Union’s executive body to launch an infringement procedure against Member State governments, and even refer them to the bloc’s top court, the European Court of Justice, if necessary.

“Article 52(4) of the GDPR [General Data Protection Regulation] requires that national governments give DPAs the human and financial resources necessary to perform their tasks,” it notes in a press release.

Brave has compiled a report to back up the complaints — in which it chronicles a drastic shortage of tech expertise and budget resource among Europe’s privacy agencies to enforce the region’s data protection framework.

Lack of proper resourcing to ensure the regulation’s teeth are able to clamp down on bad behavior — as the law’s drafters intended — has been a long-standing concern.

In the Irish data watchdog’s annual report in February — this being the agency that regulates most of big tech in Europe — the lack of any decisions in major cross-border cases against a roll-call of tech giants loomed large, despite plenty of worthy filler, with reams of stats included to illustrate the massive case load of complaints the agency is now dealing with.

Ireland’s decelerating budget and headcount in the face of rising numbers of GDPR complaints is a key concern highlighted by Brave’s report.

Per the report, half of EU data protection agencies have what it dubs a small budget (sub €5M), while only five of Europe’s 28 national GDPR enforcers have more than 10 “tech specialists”, as it describes them.

“Almost a third of the EU’s tech specialists work for one of Germany’s Länder (regional) or federal DPAs,” it warns. “All other EU countries are far behind Germany.”

“Europe’s GDPR enforcers do not have the capacity to investigate Big Tech,” is its top-line conclusion.

“If the GDPR is at risk of failing, the fault lies with national governments, not with the data protection authorities,” said Dr Johnny Ryan, Brave’s chief policy & industry relations officer, in a statement. “Robust, adversarial enforcement is essential. GDPR enforcers must be able to properly investigate ‘big tech’, and act without fear of vexatious appeals. But the national governments of European countries have not given them the resources to do so. The European Commission must intervene.”

It’s worth noting that Brave is not without its own commercial interest here. It absolutely has skin in the game, as a provider of privacy-sensitive adtech.

[…]

Source: Brave accuses European governments of GDPR resourcing failure | TechCrunch

Surprise surprise, Xiaomi web browser and music player are sending data about you to China

When security researcher Gabriel Cirlig looked around the Web on the device’s default Xiaomi browser, it recorded all the websites he visited, including search engine queries whether with Google or the privacy-focused DuckDuckGo, and every item viewed on a news feed feature of the Xiaomi software. That tracking appeared to be happening even if he used the supposedly private “incognito” mode.

The device was also recording what folders he opened and to which screens he swiped, including the status bar and the settings page. All of the data was being packaged up and sent to remote servers in Singapore and Russia, though the Web domains they hosted were registered in Beijing.

Meanwhile, at Forbes’ request, cybersecurity researcher Andrew Tierney investigated further. He also found browsers shipped by Xiaomi on Google Play—Mi Browser Pro and the Mint Browser—were collecting the same data. Together, they have more than 15 million downloads, according to Google Play statistics.

[…]

And there appear to be issues with how Xiaomi is transferring the data to its servers. Though the Chinese company claimed the data was being encrypted when transferred in an attempt to protect user privacy, Cirlig found he was able to quickly see just what was being taken from his device by decoding a chunk of information that was hidden with a form of easily crackable encoding, known as base64. It took Cirlig just a few seconds to change the garbled data into readable chunks of information.
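For readers unfamiliar with base64: it is a reversible encoding, not encryption, and undoing it is a one-liner in any language. A quick illustration with an invented payload (the actual field names Xiaomi used are not public here):

    # base64 is an encoding, not encryption: anyone can reverse it instantly.
    # The payload below is made up, purely to illustrate the point.
    import base64

    wire_data = base64.b64encode(b'{"url": "duckduckgo.com/?q=...", "device": "abc123"}')
    print(wire_data)                    # looks like gibberish in transit
    print(base64.b64decode(wire_data))  # one call recovers the plaintext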

“My main concern for privacy is that the data sent to their servers can be very easily correlated with a specific user,” warned Cirlig.

[…]

But, as pointed out by Cirlig and Tierney, it wasn’t just the website or Web search that was sent to the server. Xiaomi was also collecting data about the phone, including unique numbers for identifying the specific device and Android version. Cirlig said such “metadata” could “easily be correlated with an actual human behind the screen.”

Xiaomi’s spokesperson also denied that browsing data was being recorded under incognito mode. Both Cirlig and Tierney, however, found in their independent tests that their web habits were sent off to remote servers regardless of what mode the browser was set to, providing both photos and videos as proof.

[…]

Both Cirlig and Tierney said Xiaomi’s behavior was more invasive than other browsers like Google Chrome or Apple Safari. “It’s a lot worse than any of the mainstream browsers I have seen,” Tierney said. “Many of them take analytics, but it’s about usage and crashing. Taking browser behavior, including URLs, without explicit consent and in private browsing mode, is about as bad as it gets.”

[…]

Cirlig also suspected that his app use was being monitored by Xiaomi, as every time he opened an app, a chunk of information would be sent to a remote server. Another researcher who’d tested Xiaomi devices, though under an NDA that prevented him from discussing the matter openly, said he’d seen the manufacturer’s phones collect such data. Xiaomi didn’t respond to questions on that issue.

[…]

Late in his research, Cirlig also discovered that Xiaomi’s music player app on his phone was collecting information on his listening habits: what songs were played and when.

Source: Exclusive: Warning Over Chinese Mobile Giant Xiaomi Recording Millions Of People’s ‘Private’ Web And Phone Use

It’s a bit of a puff piece, as American software also records all this data and sends it home. The article also seems to suggest that the whole phone is always sending data home, but it only really talks about the browser and a music player app. So yes, you should have installed Firefox and used that as a browser as soon as you got the phone, but that goes for any phone that comes with Safari or Chrome as a browser too. A bit of an anti-Chinese storm in a teacup.

Australian contact-tracing app leaks telling info and increases chances of third-party tracking, say security folks. That’s OK, says Atlassian co-founder: you download worse stuff as games.

The design of Australia’s COVIDSafe contact-tracing app creates some unintended surveillance opportunities, according to a group of four security pros who unpacked its .APK file.

Penned by independent security researcher Chris Culnane, University of Melbourne tutor, cryptography researcher and masters student Eleanor McMurtry, developer Robert Merkel and Australian National University associate professor and Thinking Security CEO Vanessa Teague and posted to GitHub, the analysis notes three concerning design choices.

The first concern addressed is the decision to change UniqueIDs – the identifier the app shares with other users – only once every two hours, and for devices to only accept a new UniqueID if the app is running. The four researchers say this will make it possible for the government to understand if users are running the app.

“This means that a person who chooses to download the app, but prefers to turn it off at certain times of the day, is informing the Data Store of this choice,” they write.

The authors also suggest that persisting with a UniqueID for two hours “greatly increases the opportunities for third-party tracking.”

“The difference between 15 minutes’ and two hours’ worth of tracking opportunities is substantial. Suppose for example that the person has a home tracking device such as a Google home mini or Amazon Alexa, or even a cheap Bluetooth-enabled IoT device, which records the person’s UniqueID at home before they leave. Then consider that if the person goes to a shopping mall or other public space, every device that cooperates with their home device can share the information about where they went.”
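A minimal sketch of the linking attack described here, assuming two fixed Bluetooth listeners that simply log the UniqueIDs they overhear (the ID values are invented):

    # Two cooperating listeners (home speaker, shopping-mall beacon) log the
    # broadcast IDs they hear. Any ID present in both logs ties the two
    # places to one person.
    home_log = {"id-4f2a", "id-99c1"}   # heard at home, 09:00-09:15
    mall_log = {"id-4f2a"}              # heard at the mall, 10:30

    print(home_log & mall_log)  # {'id-4f2a'}: the home->mall trip is linked
    # With 15-minute rotation the ID heard at the mall would already have
    # changed, and the intersection would be empty.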

The analysis also notes that “It is not true that all the data shared and stored by COVIDSafe is encrypted. It shares the phone’s exact model in plaintext with other users, who store it alongside the corresponding Unique ID.”

That’s worrisome as:

“The exact phone model of a person’s contacts could be extremely revealing information. Suppose for example that a person wishes to understand whether another person whose phone they have access to has visited some particular mutual acquaintance. The controlling person could read the (plaintext) logs of COVIDSafe and detect whether the phone models matched their hypothesis. This becomes even easier if there are multiple people at the same meeting. This sort of group re-identification could be possible in any situation in which one person had control over another’s phone. Although not very useful for suggesting a particular identity, it would be very valuable in confirming or refuting a theory of having met with a particular person.”

The authors also worry that the app shares all UniqueIDs when users choose to report a positive COVID-19 test.

“COVIDSafe does not give them the option of deleting or omitting some IDs before upload,” they write. “This means that users consent to an all-or-nothing communication to the authorities about their contacts. We do not see why this was necessary. If they wish to help defeat COVID-19 by notifying strangers in a train or supermarket that they may be at risk, then they also need to share with government a detailed picture of their day’s close contacts with family and friends, unless they have remembered to stop the app at those times.”

The analysis also calls out some instances of UniqueIDs persisting for up to eight hours, for unknown reasons.

The authors conclude the app is not an immediate danger to users. But they do say it presents “serious privacy problems if we consider the central authority to be an adversary.”

None of which seems to be bothering Australians, who have downloaded it more than two million times in 48 hours and blown away adoption expectations.

Atlassian co-founder Mike Cannon-Brookes may well have helped things along, by suggesting it’s time to “turn the … angry mob mode off”. He also offered the following advice:

When asked by non technical people “Should I install this app? Is my data / privacy safe? Is it true it doesn’t track my location?” – say “Yes” and help them understand. Fight the misinformation. Remind them how little time they think before they download dozens of free, adware crap games that are likely far worse for their data & privacy than this ever would be!

Source: Australian contact-tracing app leaks telling info and increases chances of third-party tracking, say security folks • The Register

Why should the UK pensions watchdog be able to spy on your internet activities? Same reason as the Environment Agency and more than 50 more

It has been called the “most extreme surveillance in the history of Western democracy.” It has not once but twice been found to be illegal. It sparked the largest ever protest of senior lawyers who called it “not fit for purpose.”

And now the UK’s Investigatory Powers Act of 2016 – better known as the Snooper’s Charter – is set to expand to allow government agencies you may never have heard of to trawl through your web histories, emails, or mobile phone records.

In a memorandum [PDF] first spotted by The Guardian, the British government is asking that five more public authorities be added to the list of bodies that can access data scooped up under the nation’s mass-surveillance laws: the Civil Nuclear Constabulary, the Environment Agency, the Insolvency Service, the UK National Authority for Counter Eavesdropping (UKNACE), and the Pensions Regulator.

The memo explains why each should be given the extraordinary powers, in general and specifically. In general, the five agencies “are increasingly unable to rely on local police forces to investigate crimes on their behalf,” and so should be given direct access to the data pipe itself.

Five Whys

The Civil Nuclear Constabulary (CNC) is a special armed police force that does security at the UK’s nuclear sites and when nuclear materials are being moved. It should be given access even though “the current threat to nuclear sites in the UK is assessed as low” because “it can also be difficult to accurately assess risk without the full information needed.”

The Environment Agency investigates “over 40,000 suspected offences each year,” the memo stated. Which is why it should also be able to ask ISPs to hand over people’s most sensitive communications information, in order “to tackle serious and organised waste crime.”

The Insolvency Service investigates breaches of company director disqualification orders. Some of those it investigates get put in jail so it is essential that the service be allowed “to attribute subscribers to telephone numbers and analyse itemised billings” as well as be able to see what IP addresses are accessing specific email accounts.

UKNACE, a little-known agency that we have taken a look at in the past, is home of the real-life Qs, and one of its jobs is to detect attempts to eavesdrop on UK government offices. It needs access to the nation’s communications data “in order to identify and locate an attacker or an illegal transmitting device”, the memo claimed.

And lastly, the Pensions Regulator, which checks that companies have added their employees to their pension schemes, needs to be able to delve into anyone’s emails so it can “secure compliance and punish wrongdoing.”

Taken together, the requests reflect exactly what critics of the Investigatory Powers Act feared would happen: that a once-shocking power granted on the back of terrorism fears is being slowly extended to even the most obscure government agency, for no reason other than that it will make bureaucrats’ lives easier.

None of the agencies would be required to apply for warrants to access people’s internet connection data, and they would be added to another 50-plus agencies that already have access, including the Food Standards Agency, Gambling Commission, and NHS Business Services Authority.

Safeguards

One of the biggest concerns remains that there are insufficient safeguards in place to prevent the system being abused; concerns that only grow as the number of people that have access to the country’s electronic communications grows.

It is also still not known precisely how all these agencies access the data that is accumulated, or what restrictions are in place beyond a broad-brush “double lock” authorization process that requires a former judge (a judicial commissioner, or JC) to approve a minister’s approval.

Source: Why should the UK pensions watchdog be able to spy on your internet activities? Same reason as the Environment Agency and many more • The Register

Stripe Payment Provider is Silently Recording Your Movements On its Customers’ Websites

Among startups and tech companies, Stripe seems to be the near-universal favorite for payment processing. When I needed paid subscription functionality for my new web app, Stripe felt like the natural choice. After integration, however, I discovered that Stripe’s official JavaScript library records all browsing activity on my site and reports it back to Stripe. This data includes:

  1. Every URL the user visits on my site, including pages that never display Stripe payment forms
  2. Telemetry about how the user moves their mouse cursor while browsing my site
  3. Unique identifiers that allow Stripe to correlate visitors to my site against other sites that accept payment via Stripe

This post shares what I found, who else it affects, and how you can limit Stripe’s data collection in your web applications.

Source: Stripe is Silently Recording Your Movements On its Customers’ Websites · mtlynch.io

Zoom sex party moderation: app uses machine-learning to patrol nudity – will it record them to put up on the web?

As Rolling Stone reported, the app is now playing host to virtual sex parties,  “play parties,” and group check-ins which have become, as one host said, “the mutual appreciation jerk-off society.”

According to Zoom’s “acceptable use” policy, users may not use the technology to “engage in any activity that is harmful, obscene, or indecent, particularly as such would be understood in the context of business usage.” The policy specifies that this includes “displays of nudity, violence, pornography, sexually explicit material, or criminal activity.”

Zoom says that the platform uses ‘machine learning’ to identify accounts in violation of its policies — though it has remained vague about its methods for identifying offending users and content.

“We encourage users to report suspected violations of our policies, and we use a mix of tools, including machine learning, to proactively identify accounts that may be in violation,” a spokesperson for Zoom told Rolling Stone.

While Zoom executives did not respond to the outlet’s questions about the specifics of the machine-learning tools or how the platform might be alerted to nudity and pornographic content, a spokesperson did add that the company will take a “number of actions” against people found to be in violation of the specified acceptable use.

When reached for comment, a spokesperson for Zoom referred Insider to the “acceptable use” policy as well as the platform’s privacy policy which states that Zoom “does not monitor your meetings or its contents.”

The spokesperson also pointed to Yuan’s message in which he addressed how the company has “fallen short” of users’ “privacy and security expectations,” referencing instances of harassment and Zoom-bombing, and laid out the platform’s action plan going forward.

Source: Zoom sex party moderation: app uses machine-learning to patrol nudity – Insider

It’s not unthinkable that they will record the videos and then just leave them on the web for anyone to download. After all, they’ve left thousands of video calls just lying about before.

India says ‘Zoom is a not a safe platform’ and bans government users

India has effectively banned videoconferencing service Zoom for government users and repeated warnings that consumers need to be careful when using the tool.

The nation’s Cyber Coordination Centre has issued advice (PDF) titled “Advisory on Secure use of Zoom meeting platform by private individuals (not for use by government offices/officials for official purpose)”.

The document refers to past advisories that offered advice on how to use Zoom securely and warned that Zoom has weak authentication methods. Neither of those notifications mentioned policy about government use of the tool, meaning the new document is a significant change in position!

The document is otherwise a comprehensive-if-dull guide to using Zoom securely.

[…]

Source: India says ‘Zoom is a not a safe platform’ and bans government users • The Register

Apple: We respect your privacy so much we’ve revealed a little about what we can track when you use Maps

Apple has released a set of “Mobility Trends Reports” – a trove of anonymised and aggregated data that describes how people have moved around the world in the three months from 13 January to 13 April.

The data measures walking, driving and public transport use. And as you’d expect and as depicted in the image atop this story, human movement dropped off markedly as national coronavirus lockdowns came into effect.

Apple has explained the source of the data as follows:

This data is generated by counting the number of requests made to Apple Maps for directions in select countries/regions and cities. Data that is sent from users’ devices to the Maps service is associated with random, rotating identifiers so Apple doesn’t have a profile of your movements and searches. Data availability in a particular country/region or city is subject to a number of factors, including minimum thresholds for direction requests made per day.
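In essence, a “random, rotating identifier” is a short-lived pseudonym: each time window gets a fresh random token, so requests made in different windows cannot be joined into one long-term profile. A conceptual sketch — the 15-minute rotation interval is an assumption for illustration, as Apple doesn’t publish one:

    # Conceptual sketch of a rotating request identifier.
    import secrets

    ROTATION_SECONDS = 15 * 60                 # assumed interval
    _state = {"window": None, "token": None}

    def request_id(now: float) -> str:
        """Return a random token that changes every ROTATION_SECONDS."""
        window = int(now // ROTATION_SECONDS)
        if _state["window"] != window:
            _state["window"] = window
            _state["token"] = secrets.token_hex(8)
        return _state["token"]

    print(request_id(0.0))      # token for the first window
    print(request_id(100.0))    # same window -> same token, requests linkable
    print(request_id(1000.0))   # next window -> fresh, unlinkable token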

Apple justified the release by saying it thinks it’ll help governments understand what its citizens are up to in these viral times. The company has also said this is a limited offer – it won’t be sharing this kind of analysis once the crisis passes.

But the data is also a peek at what Apple is capable of. And presumably also what Google, Microsoft, Waze, Mapquest and other spatial services providers can do too. Let’s not even imagine what Facebook could produce.

Source: Apple: We respect your privacy so much we’ve revealed a little about what we can track when you use Maps • The Register

Twitter Obliterates Its Users’ Privacy Choices

The EFF’s staff technologist (also an engineer on Privacy Badger and HTTPS Everywhere) writes: Twitter greeted its users with a confusing notification this week. “The control you have over what information Twitter shares with its business partners has changed,” it said. The changes will “help Twitter continue operating as a free service,” it assured. But at what cost?

Twitter has changed what happens when users opt out of the “Allow additional information sharing with business partners” setting in the “Personalization and Data” part of its site. The changes affect two types of data sharing that Twitter does… Previously, anyone in the world could opt out of Twitter’s conversion tracking (type 1), and people in GDPR-compliant regions had to opt in. Now, people outside of Europe have lost that option. Instead, users in the U.S. and most of the rest of the world can only opt out of Twitter sharing data with Google and Facebook (type 2).
The article explains how last August Twitter discovered that its option for opting out of device-level targeting and conversion tracking “did not actually opt users out.” But after fixing that bug, “advertisers were unhappy. And Twitter announced a substantial hit to its revenue… Now, Twitter has removed the ability to opt out of conversion tracking altogether.”

While users in Europe are protected by GDPR, “users in the United States and everywhere else, who don’t have the protection of a comprehensive privacy law, are only protected by companies’ self-interest…” BoingBoing argues that Twitter “has just unilaterally obliterated all its users’ privacy choices, announcing the change with a dialog box whose only button is ‘OK.’”

Source: Twitter Accused of Obliterating Its Users’ Privacy Choices – Slashdot

Mozilla installs Scheduled Telemetry Task on Windows with Firefox 75 – if you had telemetry on

Observant Firefox users on Windows who have updated the web browser to Firefox 75 may have noticed that the upgrade brought along with it a new scheduled task. The scheduled task is also added if Firefox 75 is installed fresh on a Windows device.

The task’s name is Firefox Default Browser Agent and it is set to run once per day. Mozilla published a post on the organization’s official blog that explains the task and why it has been created.

According to Mozilla, the task has been created to help the organization “understand changes in default browser settings”. At its core, it is a Telemetry task that collects information and sends the data to Mozilla.

Here are the details:

  • The task is only created if Telemetry is enabled. If Telemetry is set to off (in the most recently used Firefox profile), it is not created and thus no data is sent. The same is true if Enterprise telemetry policies are configured. Update: Some users report that the task is created even though Telemetry was set to off on their machine.
  • Mozilla collects information “related to the system’s current and previous default browser setting, as well as the operating system locale and version”.
  • Mozilla notes that the data cannot be “associated with regular profile based telemetry data”.
  • The data is sent to Mozilla every 24 hours using the scheduled task.

Mozilla added the file default-browser-agent.exe to the Firefox installation folder on Windows which defaults to C:\Program Files\Mozilla Firefox\.

Firefox users have the following options if they don’t want the data sent to Mozilla:

  • Firefox users who opted out of Telemetry are fine; they don’t need to make any change, as the new Telemetry data is not sent to Mozilla. This applies to users who opted out of Telemetry in Firefox or used Enterprise policies to do so.
  • Firefox users who have Telemetry enabled can either opt out of Telemetry or deal with the task/executable that is responsible.

Disable the Firefox Default Browser Agent task

Here is how you disable the task:

  1. Open Start on the Windows machine and type Task Scheduler.
  2. Open the Task Scheduler and go to Task Scheduler Library > Mozilla.
  3. There you should find listed the Firefox Default Browser Agent task.
  4. Right-click on the task and select Disable.
  5. Note: Nightly users may see the Firefox Nightly Default Browser Agent task there as well and may disable it.

The task won’t be executed anymore once it is disabled.
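The same change can also be scripted. A minimal sketch using Windows’ standard Get-ScheduledTask / Disable-ScheduledTask PowerShell cmdlets, invoked here from Python; run it from an elevated prompt, and note it disables every task in the \Mozilla\ folder (task names can carry an install-specific suffix, so matching by folder path is simpler than matching by name):

    # Disable all tasks in the Task Scheduler Library's \Mozilla\ folder.
    # Requires an elevated (administrator) prompt on Windows.
    import subprocess

    subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         r"Get-ScheduledTask -TaskPath '\Mozilla\' | Disable-ScheduledTask"],
        check=True,
    )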

Closing Words

The new Telemetry task is only introduced on Windows and runs only if Telemetry is enabled (which it is by default [NOTE: Is it? I don’t think so! It asks at install!]). Mozilla is transparent about the introduction and while that is good, I’d have preferred it if the company had informed users about it in the browser after the upgrade to Firefox 75 or installation of the browser, and before the task is executed for the first time.

Source: Mozilla installs Scheduled Telemetry Task on Windows with Firefox 75 – gHacks Tech News

Go to about:telemetry in Firefox to see what it’s collecting. In my case this was none, because when FF was installed it asked me whether I wanted telemetry on or off and I said off.

Facebook asks users about coronavirus symptoms, releases friendship data to researchers

Facebook Inc said on Monday it would start surveying some U.S. users about their health as part of a Carnegie Mellon University research project aimed at generating “heat maps” of self-reported coronavirus infections.

The social media giant will display a link at the top of users’ News Feeds directing them to the survey, which the researchers say will help them predict where medical resources are needed. Facebook said it may make surveys available to users in other countries too, if the approach is successful.

Alphabet Inc’s Google, Facebook’s rival in mobile advertising, began querying users for the Carnegie Mellon project last month through its Opinion Rewards app, which exchanges responses to surveys from Google and its clients for app store credit.

Facebook said in a blog post that the Carnegie Mellon researchers “won’t share individual survey responses with Facebook, and Facebook won’t share information about who you are with the researchers.”

The company also said it would begin making new categories of data available to epidemiologists through its Disease Prevention Maps program, which is sharing aggregated location data with partners in 40 countries working on COVID-19 response.

Researchers use the data to provide daily updates on how people are moving around in different areas to authorities in those countries, along with officials in a handful of U.S. cities and states.

In addition to location data, the company will begin making available a “social connectedness index” showing the probability that people in different locations are Facebook friends, aggregated at the zip code level.

Laura McGorman, who runs Facebook’s Data for Good program, said the index could be used to assess the economic impact of the new coronavirus, revealing which communities are most likely to get help from neighboring areas and others that may need more targeted support.

New “co-location maps” can similarly reveal the probability that people in one area will come in contact with people in another, Facebook said.

Source: Facebook asks users about coronavirus symptoms, releases friendship data to researchers – Reuters

This might actually be a good way to use all that privacy-invading data.

A Feature on Zoom Secretly Displayed Data From People’s LinkedIn Profiles

But what many people may not know is that, until Thursday, a data-mining feature on Zoom allowed some participants to surreptitiously have access to LinkedIn profile data about other users — without Zoom asking for their permission during the meeting or even notifying them that someone else was snooping on them.

The undisclosed data mining adds to growing concerns about Zoom’s business practices at a moment when public schools, health providers, employers, fitness trainers, prime ministers and queer dance parties are embracing the platform.

An analysis by The New York Times found that when people signed in to a meeting, Zoom’s software automatically sent their names and email addresses to a company system it used to match them with their LinkedIn profiles.

The data-mining feature was available to Zoom users who subscribed to a LinkedIn service for sales prospecting, called LinkedIn Sales Navigator. Once a Zoom user enabled the feature, that person could quickly and covertly view LinkedIn profile data — like locations, employer names and job titles — for people in the Zoom meeting by clicking on a LinkedIn icon next to their names.

The system did not simply automate the manual process of one user looking up the name of another participant on LinkedIn during a Zoom meeting. In tests conducted last week, The Times found that even when a reporter signed in to a Zoom meeting under pseudonyms — “Anonymous” and “I am not here” — the data-mining tool was able to instantly match him to his LinkedIn profile. In doing so, Zoom disclosed the reporter’s real name to another user, overriding his efforts to keep it private.

Reporters also found that Zoom automatically sent participants’ personal information to its data-mining tool even when no one in a meeting had activated it. This week, for instance, as high school students in Colorado signed in to a mandatory video meeting for a class, Zoom readied the full names and email addresses of at least six students — and their teacher — for possible use by its LinkedIn profile-matching tool, according to a Times analysis of the data traffic that Zoom sent to a student’s account.

The discoveries about Zoom’s data-mining feature echo what users have learned about the surveillance practices of other popular tech platforms over the last few years. The video-meeting platform that has offered a welcome window on American resiliency during the coronavirus — providing a virtual peek into colleagues’ living rooms, classmates’ kitchens and friends’ birthday celebrations — can reveal more about its users than they may realize.

“People don’t know this is happening, and that’s just completely unfair and deceptive,” Josh Golin, the executive director of the Campaign for a Commercial-Free Childhood, a nonprofit group in Boston, said of the data-mining feature. He added that storing the personal details of schoolchildren for nonschool purposes, without alerting them or obtaining a parent’s permission, was particularly troubling.

Source: A Feature on Zoom Secretly Displayed Data From People’s LinkedIn Profiles – The New York Times

Thousands of recorded Zoom Video Calls Left Exposed on Open Web

Thousands of personal Zoom videos have been left viewable on the open Web, highlighting the privacy risks to millions of Americans as they shift many of their personal interactions to video calls in an age of social distancing. From a report: Many of the videos appear to have been recorded through Zoom’s software and saved onto separate online storage space without a password. But because Zoom names every video recording in an identical way, a simple online search can reveal a long stream of videos that anyone can download and watch. Zoom videos are not recorded by default, though call hosts can choose to save them to Zoom servers or their own computers. There’s no indication that live-streamed videos or videos saved onto Zoom’s servers are publicly visible. But many participants in Zoom calls may be surprised to find their faces, voices and personal information exposed because a call host can record a large group call without participants’ consent.

Source: Thousands of Zoom Video Calls Left Exposed on Open Web – Slashdot

Someone Convinced Google To Delist Our Entire Right To Be Forgotten Tag In The EU For Searches On Their Name, which means we can’t tell if they are abusing the system

The very fact that the tag being delisted when searching for this unnamed individual is the “right to be forgotten” tag shows that whoever this person is, they recognize that they are not trying to cover up the record of, say, an FTC case against them from… oh, let’s just say 2003… but rather are now trying to cover up their current effort to abuse the right to be forgotten process.

Anyway, in theory (purely in theory, of course) if someone in the EU searched for the name of anyone, it might be helpful to know if the Director of the FTC’s Bureau of Consumer Protection once called him a “spam scammer” who “conned consumers in two ways.” But, apparently, in the EU, that sort of information is no longer useful. And you also can’t find out that he’s been using the right to be forgotten process to further cover his tracks. That seems unfortunate, and entirely against the supposed principle behind the “right to be forgotten.” No one is trying to violate anyone’s “privacy” here. We’re talking about public court records, and an FTC complaint and later settlement on a fairly serious crime that took place not all that long ago. That ain’t private information. And, even more to the point, the much more recent efforts by that individual to then hide all the details of this public record.

Source: Someone Convinced Google To Delist Our Entire Right To Be Forgotten Tag In The EU For Searches On Their Name | Techdirt

US Officials Use Mobile Ad Location Data to Study How COVID-19 Spreads, not cellphone tower data

Government officials across the U.S. are using location data from millions of cellphones in a bid to better understand the movements of Americans during the coronavirus pandemic and how they may be affecting the spread of the disease…

The data comes from the mobile advertising industry rather than cellphone carriers. The aim is to create a portal for federal, state and local officials that contains geolocation data in what could be as many as 500 cities across the U.S., one of the people said, to help plan the epidemic response… It shows which retail establishments, parks and other public spaces are still drawing crowds that could risk accelerating the transmission of the virus, according to people familiar with the matter… The data can also reveal general levels of compliance with stay-at-home or shelter-in-place orders, according to experts inside and outside government, and help measure the pandemic’s economic impact by revealing the drop-off in retail customers at stores, decreases in automobile miles driven and other economic metrics.

The CDC has started to get analyses based on location data through an ad hoc coalition of tech companies and data providers — all working in conjunction with the White House and others in government, people said.

The CDC and the White House didn’t respond to requests for comment.
It’s the cellphone carriers turning over pandemic-fighting data in Germany, Austria, Spain, Belgium and the U.K., according to the article, while Israel mapped infections using its intelligence agencies’ antiterrorism phone-tracking. But so far in the U.S., “the data being used has largely been drawn from the advertising industry.”

“The mobile marketing industry has billions of geographic data points on hundreds of millions of U.S. cell mobile devices…”

Source: US Officials Use Mobile Ad Location Data to Study How COVID-19 Spreads – Slashdot

I am unsure whether this says more about the legality of the move, or about cell-tower data being so decentralised that it is technically difficult to use it to track the whole population.

Israel uses anti-terrorist tech to monitor phones of virus patients

Israel has long been known for its use of technology to track the movements of Palestinian militants. Now, Prime Minister Benjamin Netanyahu wants to use similar technology to stop the movement of the coronavirus.

Netanyahu’s Cabinet on Sunday authorized the Shin Bet security agency to use its phone-snooping tactics on coronavirus patients, an official confirmed, despite concerns from civil-liberties advocates that the practice would raise serious privacy issues. The official spoke on condition of anonymity pending an official announcement.

Netanyahu announced his plan in a televised address late Saturday, telling the nation that the drastic steps would protect the public’s health, though it would also “entail a certain degree of violation of privacy.”

Israel has identified more than 200 cases of the coronavirus. Based on interviews with these patients about their movements, health officials have put out public advisories ordering tens of thousands of people who may have come into contact with them into protective home quarantine.

The new plan would use mobile-phone tracking technology to give a far more precise history of an infected person’s movements before they were diagnosed and identify people who might have been exposed.

In his address, Netanyahu acknowledged the technology had never been used on civilians. But he said the unprecedented health threat posed by the virus justified its use. For most people, the coronavirus causes only mild or moderate symptoms. But for some, especially older adults and people with existing health problems, it can cause more severe illness.

“They are not minor measures. They entail a certain degree of violation of the privacy of those same people, who we will check to see whom they came into contact with while sick and what preceded that. This is an effective tool for locating the virus,” Netanyahu said.

The proposal sparked a heated debate over the use of sensitive security technology, who would have access to the information and what exactly would be done with it.

Nitzan Horowitz, leader of the liberal opposition party Meretz, said that tracking citizens “using databases and sophisticated technological means are liable to result in a severe violation of privacy and basic civil liberties.” He said any use of the technology must be supervised, with “clear rules” for the use of the information.

Netanyahu led a series of discussions Sunday with security and health officials to discuss the matter. Responding to privacy concerns, he said late Sunday he had ordered a number of changes in the plan, including reducing the scope of data that would be gathered and limiting the number of people who could see the information, to protect against misuse.

Source: Israel takes step toward monitoring phones of virus patients – ABC News

What I’m missing is a maximum duration for these powers to be used.

Zoom Removes Code That Sends Data to Facebook – but there is still plenty of nasty stuff in there

On Friday, video-conferencing software Zoom issued an update to its iOS app which stops it sending certain pieces of data to Facebook. The move comes after a Motherboard analysis of the app found it sent information such as when a user opened the app, their timezone, city, and device details to the social network giant.

When Motherboard analyzed the app, Zoom’s privacy policy did not make the data transfer to Facebook clear.

“Zoom takes its users’ privacy extremely seriously. We originally implemented the ‘Login with Facebook’ feature using the Facebook SDK in order to provide our users with another convenient way to access our platform. However, we were recently made aware that the Facebook SDK was collecting unnecessary device data,” Zoom told Motherboard in a statement on Friday.

Source: Zoom Removes Code That Sends Data to Facebook – VICE

But there is still plenty of data being hoovered up by Zoom:
Yeah, that Zoom app you’re trusting with work chatter? It lives with ‘vampires feeding on the blood of human data’

As the global coronavirus pandemic pushes the popularity of videoconferencing app Zoom to new heights, one web veteran has sounded the alarm over its “creepily chummy” relationship with tracking-based advertisers.

Doc Searls, co-author of the influential internet marketing book The Cluetrain Manifesto last century, today warned Zoom not only has the right to extract data from its users and their meetings, it can work with Google and other ad networks to turn this personal information into targeted ads that follow them across the web.

This personal info includes, and is not limited to, names, addresses and any other identifying data, job titles and employers, Facebook profiles, and device specifications. Crucially, it also includes “the content contained in cloud recordings, and instant messages, files, whiteboards … shared while using the service.”

Searls said reports outlining how Zoom was collecting and sharing user data with advertisers, marketers, and other companies, prompted him to pore over the software maker’s privacy policy to see how it processes calls, messages, and transcripts.

And he concluded: “Zoom is in the advertising business, and in the worst end of it: the one that lives off harvested personal data.

“What makes this extra creepy is that Zoom is in a position to gather plenty of personal data, some of it very intimate (for example with a shrink talking to a patient) without anyone in the conversation knowing about it. (Unless, of course, they see an ad somewhere that looks like it was informed by a private conversation on Zoom.)”

The privacy policy, as of March 18, lumps together a lot of different types of personal information, from contact details to meeting contents, and says this info may be used, one way or another, to personalize web ads to suit your interests.

“Zoom does use certain standard advertising tools which require personal data,” the fine-print states. “We use these tools to help us improve your advertising experience (such as serving advertisements on our behalf across the internet, serving personalized ads on our website, and providing analytics services) … For example, Google may use this data to improve its advertising services for all companies who use their services.”

Searls, a former Harvard Berkman Fellow, said netizens are likely unaware their information could be harvested from their Zoom accounts and video conferences for advertising and tracking across the internet: “A person whose personal data is being shed on Zoom doesn’t know that’s happening because Zoom doesn’t tell them. There’s no red light, like the one you see when a session is being recorded.

“Nobody goes to Zoom for an ‘advertising experience,’ personalized or not. And nobody wants ads aimed at their eyeballs elsewhere on the ‘net by third parties using personal information leaked out through Zoom.”

Speaking of Zoom…

Zoom’s iOS app sent analytics data to Facebook even if you didn’t use Facebook, due to the application’s use of the social network’s Graph API, Vice discovered. The privacy policy stated the software collects profile information when a Facebook account is used to sign into Zoom, though it didn’t say anything about what happens if you don’t use Facebook. Zoom has since corrected its code to not send analytics in these circumstances.

It should go without saying but don’t share your Zoom meeting ID and password in public, such as on social media, as miscreants will spot it, hijack it, and bomb it with garbage. And don’t forget to set a strong password, too. Zoom had to beef up its meeting security after Check Point found a bunch of weaknesses, such as the fact it was easy to guess or brute-force meeting IDs.

Source: Yeah, that Zoom app you’re trusting with work chatter? It lives with ‘vampires feeding on the blood of human data’ • The Register

Android Apps Are Transmitting what other apps you have ever installed to marketing people

At this point we’re all familiar with apps of all sorts tracking our every move and sharing that info with pretty much every third party imaginable. But it actually may not be as simple as tracking where you go and what you do in an app: It turns out that these apps might be dropping details about the other programs you’ve installed on your phone, too.

This news comes courtesy of a new paper out from a team of European researchers who found that some of the most popular apps in the Google Play store were bundled with certain bits of software that pull details of any apps that were ever downloaded onto a person’s phone.

Before you immediately chuck your Android device out the window in some combination of fear and disgust, we need to clarify a few things. First, these bits of software—called IAMs, or “installed application methods”—have some decent uses. A photography app might need to check the surrounding environment to make sure you have a camera installed somewhere on your phone. If another app immediately glitches out in the presence of an on-phone camera, knowing the environment—and the reason for that glitch—can help a developer know which part of his app to tinker with to keep that from happening in the future.

Because these IAM-specific calls are technically for debugging purposes, they generally don’t need to secure permissions the same way an app usually would when, say, asking for your location. Android devices have actually gotten better about clamping down on that form of invasive tracking after struggling with it for years, with Android 11 formally requiring that devs apply for location permissions access before Google grants it.

But at the same time, surveying the apps on a given phone can go the invasive route very easily: The apps we download can tip developers off about our incomes, our sexualities, and some of our deepest fears.
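To make that risk concrete: on Android the raw list comes from PackageManager calls such as getInstalledApplications(), which historically required no permission, and turning the list into a profile is a trivial lookup. A sketch with an entirely invented category mapping:

    # Why a raw list of installed package names is sensitive: mapping it to
    # traits is a dictionary lookup. Package names and categories below are
    # invented for illustration.
    SENSITIVE = {
        "com.example.lgbt_dating":   "sexual orientation",
        "com.example.payday_loans":  "financial distress",
        "com.example.quit_drinking": "health / addiction",
    }

    def inferred_traits(installed: list) -> list:
        """Traits a tracker could infer from a phone's app list."""
        return [SENSITIVE[pkg] for pkg in installed if pkg in SENSITIVE]

    print(inferred_traits(["com.android.chrome", "com.example.payday_loans"]))
    # -> ['financial distress']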

The research team found that, of the roughly 4,200 commercial apps it surveyed making these IAM calls, almost half were strictly grabbing details on the surrounding apps. For context, most other calls—which were for monitoring details about the app like available updates, or the current app version—together made up less than one percent of all calls they observed.

There are a few reasons for the prevalence of this errant app-sniffing behavior, but for the most part it boils down to one thing: money. A lot of these IAMs come from apps that are on-boarding software from adtech companies offering developers an easy way to make quick cash off their free product. That’s probably why the lion’s share—more than 83%—of these calls were being made on behalf of third-party code that the dev onboarded for their commercially available app, rather than code that was baked into that app by design.

And for the most part, these third parties are—as you might have suspected—companies that specialize in targeted advertising. Looking over the top 20 libraries that pull some kind of data via IAMs, some of the top contenders, like ironSource or AppNext, are in the business of getting the right ads in front of the right player at the right time, offering the developer the right price for their effort.

And because app developers—like most people in the publishing space—are often hard-up for cash, they’ll onboard these money-making tools without asking how they make that money in the first place. This kind of daisy-chaining is the same reason we see trackers of every shape and size running across every site in the modern ecosystem, at times without the people actually behind the site having any idea.

Source: Android Apps May Be Snooping on You More Than You Realize