The Linkielist

Linking ideas with the world

Mozilla offers trusted VPN services – good timing!

Starting today, there’s a VPN on the market from a company you trust. The Mozilla VPN (Virtual Private Network) is now available on Windows and Android devices. This fast and easy-to-use VPN service is brought to you by Mozilla, the makers of Firefox, and a trusted name in online consumer security and privacy services.


The first thing you may notice when you install the Mozilla VPN is how fast your browsing experience is. That’s because the Mozilla VPN is built on modern, lean technology: the WireGuard protocol’s roughly 4,000 lines of code are a fraction of the size of the legacy protocols used by other VPN service providers.

You will also see a simple, easy-to-use interface, whether you are new to VPNs or just want to set it up and get onto the web.

With no long-term contracts required, the Mozilla VPN is available for just $4.99 USD per month and will initially be available in the United States, Canada, the United Kingdom, Singapore, Malaysia, and New Zealand, with plans to expand to other countries this Fall.

Source: Mozilla Puts Its Trusted Stamp on VPN – The Mozilla Blog

Good timing indeed, especially after seven “no logs” VPN services just dumped millions of lines of logs containing very personal information.

E.U. Court Invalidates Data-Sharing Agreement With U.S.

The European Union’s top court ruled Thursday that an agreement that allows big tech companies to transfer data to the United States is invalid, and that national regulators need to take tougher action to protect the privacy of users’ data.

The ruling does not mean an immediate halt to all data transfers outside the EU, as there is another legal mechanism that some companies can use. But it means that the scrutiny over data transfers will be ramped up and that the EU and U.S. may have to find a new system that guarantees that Europeans’ data is afforded the same privacy protection in the U.S. as it is in the EU.

The case began after former U.S. National Security Agency contractor Edward Snowden revealed in 2013 that the American government was snooping on people’s online data and communications. The revelations included detail on how Facebook gave U.S. security agencies access to the personal data of Europeans.

That year, Austrian activist and law student Max Schrems filed a complaint against Facebook, which has its EU base in Ireland, arguing that personal data should not be sent to the U.S., as many companies do, because data protection there is not as strong as in Europe. The EU has some of the world’s toughest data privacy rules under a system known as the GDPR.

Source: E.U. Court Invalidates Data-Sharing Agreement With U.S. | Time

Google faces lawsuit over tracking in apps even when users opted out

Google records what people are doing on hundreds of thousands of mobile apps even when they follow the company’s recommended settings for stopping such monitoring, a lawsuit seeking class action status alleged on Tuesday.

The data privacy lawsuit is the second filed in as many months against Google by the law firm Boies Schiller Flexner on behalf of a handful of individual consumers.

[…]

The new complaint in a U.S. district court in San Jose accuses Google of violating federal wiretap law and California privacy law by logging what users are looking at in news, ride-hailing and other types of apps despite them having turned off “Web & App Activity” tracking in their Google account settings.

The lawsuit alleges the data collection happens through Google’s Firebase, a set of software popular among app makers for storing data, delivering notifications and ads, and tracking glitches and clicks. Firebase typically operates inside apps invisibly to consumers.
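
To make the mechanism concrete, here is a hypothetical sketch of how an embedded analytics SDK can report app usage with no visible UI. This is NOT the real Firebase API; the class, method names, and fields are invented purely to illustrate how such a component operates inside an app:

```python
# Hypothetical sketch of an embedded analytics SDK. Not the real
# Firebase API: all names and fields here are invented.
import json
import time

class EmbeddedAnalytics:
    """Collects events inside the app and batches them for upload."""

    def __init__(self, app_id: str):
        self.app_id = app_id
        self.queue = []

    def log_event(self, name: str, **params):
        # Called on screen views, taps, crashes, ad views -- the user
        # sees nothing while events accumulate.
        self.queue.append({
            "app_id": self.app_id,
            "event": name,
            "params": params,
            "ts": time.time(),
        })

    def flush(self) -> str:
        """Serialise the pending batch as it might be sent to the vendor."""
        payload = json.dumps(self.queue)
        self.queue = []
        return payload

sdk = EmbeddedAnalytics("news-app-123")
sdk.log_event("screen_view", screen="article_42")
sdk.log_event("ad_impression", unit="banner_top")
batch = sdk.flush()   # two events leave the app in one request
```

The lawsuit’s core claim is that this kind of collection continued even for users who had switched off “Web & App Activity” in their Google settings.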

“Even when consumers follow Google’s own instructions and turn off ‘Web & App Activity’ tracking on their ‘Privacy Controls,’ Google nevertheless continues to intercept consumers’ app usage and app browsing communications and personal information,” the lawsuit contends.

Google uses some Firebase data to improve its products and personalize ads and other content for consumers, according to the lawsuit.

Reuters reported in March that U.S. antitrust investigators are looking into whether Google has unlawfully stifled competition in advertising and other businesses by effectively making Firebase unavoidable.

In its case last month, Boies Schiller Flexner accused Google of surreptitiously recording Chrome browser users’ activity even when they activated what Google calls Incognito mode. Google said it would fight the claim.

Source: Google faces lawsuit over tracking in apps even when users opted out – Reuters

The days of “Don’t be evil” are long past

Only 9% of visitors give GDPR consent to be tracked

Most GDPR consent banner implementations are deliberately engineered to be difficult to use and are full of dark patterns that are illegal under the GDPR itself.

I wanted to find out how many visitors would engage with a GDPR banner if it were implemented properly and how many would grant consent to their information being collected and shared.

[…]

If you implement a proper GDPR consent banner, the vast majority of visitors will most probably decline to give you consent: 91%, to be exact, of the 19,000 visitors in my study.
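
The arithmetic behind that headline number is simple; a quick sketch (the 19,000-visitor total and 9% rate are from the article, the absolute split is derived):

```python
# Sanity-check of the study's headline numbers: 19,000 visitors,
# 9% consenting. The split below is derived, not reported.
total_visitors = 19_000
consented = round(total_visitors * 0.09)   # visitors who granted consent
declined = total_visitors - consented      # visitors who declined or ignored

consent_rate = consented / total_visitors
print(f"{consent_rate:.0%} consented, {1 - consent_rate:.0%} did not")
```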

What’s a proper and legal implementation of a GDPR banner?

  • It’s a banner that doesn’t take much space
  • It allows people to browse your site even when ignoring the banner
  • It’s a banner that allows visitors to say “no” just as easily as they can say “yes”

[…]

Source: Only 9% of visitors give GDPR consent to be tracked

Uncovered: 1,000 phrases that incorrectly trigger Alexa, Siri, and Google Assistant

As Alexa, Google Home, Siri, and other voice assistants have become fixtures in millions of homes, privacy advocates have grown concerned that their near-constant listening to nearby conversations could pose more risk than benefit to users. New research suggests the privacy threat may be greater than previously thought.

The findings demonstrate how common it is for dialog in TV shows and other sources to produce false triggers that cause the devices to turn on, sometimes sending nearby sounds to Amazon, Apple, Google, or other manufacturers. In all, researchers uncovered more than 1,000 word sequences—including those from Game of Thrones, Modern Family, House of Cards, and news broadcasts—that incorrectly trigger the devices.

“The devices are intentionally programmed in a somewhat forgiving manner, because they are supposed to be able to understand their humans,” one of the researchers, Dorothea Kolossa, said. “Therefore, they are more likely to start up once too often rather than not at all.”

That which must not be said

Examples of words or word sequences that produce false triggers include:

  • Alexa: “unacceptable,” “election,” and “a letter”
  • Google Home: “OK, cool,” and “Okay, who is reading”
  • Siri: “a city” and “hey jerry”
  • Microsoft Cortana: “Montana”

The two videos below show a Game of Thrones character saying “a letter” and a Modern Family character uttering “hey Jerry,” activating Alexa and Siri, respectively.

Accidental Trigger #1 – Alexa – Cloud
Accidental Trigger #3 – Hey Siri – Cloud

In both cases, the phrases activate the device locally, where algorithms analyze the phrases; after mistakenly concluding that these are likely a wake word, the devices then send the audio to remote servers where more robust checking mechanisms also mistake the words for wake terms. In other cases, the words or phrases trick only the local wake word detection but not algorithms in the cloud.
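
A toy sketch of that two-stage pipeline, with invented similarity scores standing in for real acoustic models (real systems score audio, not text):

```python
# Toy sketch of two-stage wake-word detection: a permissive on-device
# check followed by a stricter cloud-side check. Scores and thresholds
# are invented for illustration.

# Pretend acoustic similarity to the wake word "alexa" (0.0 - 1.0).
FAKE_SCORES = {
    "alexa": 0.98,
    "a letter": 0.74,      # accidental trigger from the study
    "election": 0.71,      # another accidental trigger
    "good morning": 0.12,
}

LOCAL_THRESHOLD = 0.6   # deliberately forgiving, per the researchers
CLOUD_THRESHOLD = 0.9   # stricter second check

def local_detector(phrase: str) -> bool:
    """Lightweight on-device check; errs on the side of waking up."""
    return FAKE_SCORES.get(phrase, 0.0) >= LOCAL_THRESHOLD

def cloud_detector(phrase: str) -> bool:
    """Heavier server-side check, run only after the device wakes."""
    return FAKE_SCORES.get(phrase, 0.0) >= CLOUD_THRESHOLD

def audio_leaves_device(phrase: str) -> bool:
    # The privacy issue: audio is uploaded as soon as the *local*
    # check passes, even if the cloud later rejects it.
    return local_detector(phrase)

# "a letter" passes the forgiving local check, so audio is sent to
# the cloud, even though the stricter check there rejects it.
assert audio_leaves_device("a letter") and not cloud_detector("a letter")
assert audio_leaves_device("alexa") and cloud_detector("alexa")
assert not audio_leaves_device("good morning")
```

The forgiving local threshold is exactly what Kolossa describes: better to wake once too often than not at all, at the cost of audio leaving the home.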

Unacceptable privacy intrusion

When devices wake, the researchers said, they record a portion of what’s said and transmit it to the manufacturer. The audio may then be transcribed and checked by employees in an attempt to improve word recognition. The result: fragments of potentially private conversations can end up in the company logs.

The risk to privacy isn’t solely theoretical. In 2016, law enforcement authorities investigating a murder subpoenaed Amazon for Alexa data transmitted in the moments leading up to the crime. Last year, The Guardian reported that Apple employees sometimes transcribe sensitive conversations overheard by Siri. They include private discussions between doctors and patients, business deals, seemingly criminal dealings, and sexual encounters.

The research paper, titled “Unacceptable, where is my privacy?,” is the product of Lea Schönherr, Maximilian Golla, Jan Wiele, Thorsten Eisenhofer, Dorothea Kolossa, and Thorsten Holz of Ruhr University Bochum and Max Planck Institute for Security and Privacy. In a brief write-up of the findings, they wrote:

Our setup was able to identify more than 1,000 sequences that incorrectly trigger smart speakers. For example, we found that depending on the pronunciation, «Alexa» reacts to the words “unacceptable” and “election,” while «Google» often triggers to “OK, cool.” «Siri» can be fooled by “a city,” «Cortana» by “Montana,” «Computer» by “Peter,” «Amazon» by “and the zone,” and «Echo» by “tobacco.” See videos with examples of such accidental triggers here.

In our paper, we analyze a diverse set of audio sources, explore gender and language biases, and measure the reproducibility of the identified triggers. To better understand accidental triggers, we describe a method to craft them artificially. By reverse-engineering the communication channel of an Amazon Echo, we are able to provide novel insights on how commercial companies deal with such problematic triggers in practice. Finally, we analyze the privacy implications of accidental triggers and discuss potential mechanisms to improve the privacy of smart speakers.

The researchers analyzed voice assistants from Amazon, Apple, Google, Microsoft, and Deutsche Telekom, as well as three Chinese models by Xiaomi, Baidu, and Tencent. Results published on Tuesday focused on the first four. Representatives from Apple, Google, and Microsoft didn’t immediately respond to a request for comment.

The full paper hasn’t yet been published, and the researchers declined to provide a copy ahead of schedule. The general findings, however, already provide further evidence that voice assistants can intrude on users’ privacy even when people don’t think their devices are listening. For those concerned about the issue, it may make sense to keep voice assistants unplugged, turned off, or blocked from listening except when needed—or to forgo using them at all.

Source: Uncovered: 1,000 phrases that incorrectly trigger Alexa, Siri, and Google Assistant | Ars Technica

Zoom misses its own deadline to publish its first transparency report

How many government demands for user data has Zoom received? We won’t know until “later this year,” an updated Zoom blog post now says.

The video conferencing giant previously said it would release the number of government demands it has received by June 30. But the company said it’s missed that target and has given no firm new date for releasing the figures.

It comes amid heightened scrutiny of the service after a number of security issues and privacy concerns came to light following a massive spike in its user base, thanks to millions working from home because of the coronavirus pandemic.

In a blog post today reflecting on the company’s turnaround efforts, chief executive Eric Yuan said the company has “made significant progress defining the framework and approach for a transparency report that details information related to requests Zoom receives for data, records or content.”

“We look forward to providing the fiscal [second quarter] data in our first report later this year,” he said.

Transparency reports offer rare insights into the number of demands or requests a company gets from the government for user data. These reports are not mandatory, but they are important for understanding the scale and scope of government surveillance.

Zoom said last month it would launch its first transparency report after the company admitted it briefly suspended the Zoom accounts of two U.S.-based activists and one Hong Kong-based activist at the request of the Chinese government. The users, who were not based in China, held a Zoom call commemorating the anniversary of the Tiananmen Square massacre, an event that’s cloaked in secrecy and censorship in mainland China.

Source: Zoom misses its own deadline to publish its first transparency report | TechCrunch

Consumer orgs ask world’s competition watchdogs: Are you really going to let Google walk off with all Fitbit’s data?

Twenty consumer and citizen rights groups have published an open letter [PDF] urging regulators to pay closer attention to Google parent Alphabet’s planned acquisition of Fitbit.

The letter describes the pending purchase as a “game-changer” that will test regulators’ resolve to analyse how the vast quantities of health and location data slurped by Google would affect broader market competition.

“Google could exploit Fitbit’s exceptionally valuable health and location datasets, and data collection capabilities, to strengthen its already dominant position in digital markets such as online advertising,” the group warned.

Signatories to the letter include US-based Color of Change, Center for Digital Democracy and the Omidyar Network, the Australian Privacy Foundation, and BEUC – the European Consumer Organisation.

Google confirmed its intent to acquire Fitbit for $2.1bn in November. The deal is still pending, subject to regulator approval. Google has sought the green light from the European Commission, which is expected to publish its decision on 20 July.

The EU’s executive branch can either approve the buy (with or without additional conditions) or opt to start a four-month investigation.

The US Department of Justice has also started its own investigation, requesting documents from both parties. If the deal is stopped, Google will be forced to pay a $250m termination fee to Fitbit.

Separately, the Australian Competition and Consumer Commission (ACCC) has voiced concerns that the Fitbit-Google deal could have a distorting effect on the advertising market.

“Buying Fitbit will allow Google to build an even more comprehensive set of user data, further cementing its position and raising barriers to entry for potential rivals,” said ACCC chairman Rod Sims last month.

“User data available to Google has made it so valuable to advertisers that it faces only limited competition.”

The Register has asked Google and Fitbit for comment. ®

Updated at 14:06 UTC 02/07/20 to add

A Google spokesperson told The Reg: “Throughout this process we have been clear about our commitment not to use Fitbit health and wellness data for Google ads and our responsibility to provide people with choice and control with their data.

“Similar to our other products, with wearables, we will be transparent about the data we collect and why. And we do not sell personal information to anyone.”

Source: Consumer orgs ask world’s competition watchdogs: Are you really going to let Google walk off with all Fitbit’s data? • The Register

Purism’s quest against Intel’s Management Engine black box CPU now comes in 14 inches

This latest device succeeds the previous Librem 13 laptop, which ran for four generations, and includes a slightly bigger display, a hexa-core Comet Lake Intel Core i7 processor, gigabit Ethernet, and USB-C. As the name implies, the Librem 14 packs a 14-inch, 1920×1080 IPS display. Purism said this comes without increasing the laptop’s dimensions, thanks to smaller bezels.

Librem 14

Crucially, it is loaded with the usual privacy features found in Purism’s kit, such as hardware kill switches that disconnect the microphone and webcam from the laptop’s circuitry. It also comes with the firm’s PureBoot tech, which includes Purism’s in-house coreboot BIOS replacement and a mostly excised Intel Management Engine (IME).

The IME is a hidden coprocessor included in most of Chipzilla’s chipsets since 2008. It allows system administrators to remotely manage devices using out-of-band communications. But it’s also controversial in the security community since it’s somewhat of a black box.

There is little by way of public documentation. Intel hasn’t released the source code. And, to add insult to injury, it’s also proven vulnerable to exploitation in the past.

Source: Purism’s quest against Intel’s Management Engine black box CPU now comes in 14 inches • The Register

Facebook says 5,000 app developers got user data after Cambridge Analytica scandal cutoff date

The company said that it continued sharing user data with approximately 5,000 developers even after their apps’ access to that data should have expired.

The incident is related to a security control that Facebook added to its systems following the Cambridge Analytica scandal of early 2018.

Responding to criticism that it allowed app developers too much access to user information, Facebook added at the time a new mechanism to its API that prevented apps from accessing a user’s data if the user did not use the app for more than 90 days.

However, Facebook said that it recently discovered that in some instances, this safety mechanism failed to activate and allowed some apps to continue accessing user information even past the 90-day cutoff date.
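
The intended rule, and the reported failure mode, can be sketched in a few lines. The field names and the `check_enabled` flag are hypothetical; only the "block access after 90 days of inactivity" rule comes from the article:

```python
# Sketch of the 90-day inactivity cutoff described in the article.
# Names and the revocation logic are hypothetical illustrations.
from datetime import date, timedelta

CUTOFF = timedelta(days=90)

def app_may_access(last_used: date, today: date,
                   check_enabled: bool = True) -> bool:
    """Return True if an app may still receive this user's data."""
    if not check_enabled:
        # The reported bug: when the safety check failed to run,
        # data kept flowing past the cutoff.
        return True
    return today - last_used <= CUTOFF

today = date(2020, 7, 1)
stale = date(2020, 1, 1)  # well past 90 days of inactivity

assert not app_may_access(stale, today)                   # intended behaviour
assert app_may_access(stale, today, check_enabled=False)  # the failure mode
```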

[…]

“From the last several months of data we have available, we currently estimate this issue enabled approximately 5,000 developers to continue receiving [user] information,” Papamiltiadis said.

The company didn’t clarify how many users were impacted and had their data made available to app developers even after they stopped using the apps in question.

Source: Facebook says 5,000 app developers got user data after cutoff date | ZDNet

Talk about the fox guarding the hen house. Comcast to handle DNS-over-HTTPS for Firefox-using subscribers

Comcast has agreed to be the first home broadband internet provider to handle secure DNS-over-HTTPS queries for Firefox browser users in the US, Mozilla has announced.

This means the ISP, which has joined Moz’s Trusted Recursive Resolver (TRR) Program, will perform domain-name-to-IP-address lookups for subscribers using Firefox via encrypted HTTPS channels. That prevents network eavesdroppers from snooping on DNS queries or meddling with them to redirect connections to malicious webpages.
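
What actually travels inside that encrypted channel is an ordinary DNS wire-format query, base64url-encoded into an HTTPS GET as specified by RFC 8484. A minimal sketch (the resolver URL is illustrative, and no network request is made here):

```python
# Build the DNS wire-format query a DoH client sends over HTTPS
# (RFC 8484). Message construction only; no request is made, and
# the resolver URL is illustrative.
import base64
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Encode a single-question DNS query (qtype 1 = A record)."""
    # Header: ID=0 (RFC 8484 recommends 0 for cache friendliness),
    # flags=0x0100 (recursion desired), QDCOUNT=1.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QCLASS=1 (IN)
    return header + question

def doh_get_url(resolver: str, hostname: str) -> str:
    """Base64url-encode the query into an RFC 8484 GET URL."""
    msg = build_dns_query(hostname)
    b64 = base64.urlsafe_b64encode(msg).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={b64}"

url = doh_get_url("https://doh.example.net/dns-query", "example.com")
# Because the query travels inside TLS, an on-path eavesdropper sees
# only an HTTPS connection to the resolver, not the name looked up.
```

Note the trade-off the article goes on to make: eavesdroppers on the wire are blinded, but whoever operates the resolver still sees every name queried.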

Last year Comcast and other broadband giants were fiercely against such safeguards, though it appears Comcast has had a change of heart – presumably when it figured it could offer DNS-over-HTTPS services as well as its plain-text DNS resolvers.

At some point in the near future, Firefox users subscribed to Comcast will use the ISP’s DNS-over-HTTPS resolvers by default, though they can opt to switch to other secure DNS providers or opt out completely.

[…]

Incredibly, DNS-over-HTTPS was heralded as a way to prevent, among others, ISPs from snooping on and analyzing their subscribers’ web activities to target them with adverts tailored to their interests, or sell the information as a package to advertisers and industry analysts. And yet, here’s Comcast providing a DNS-over-HTTPS service for Firefox fans, allowing it to inspect and exploit their incoming queries if it so wishes. Talk about a fox guarding the hen house.

ISPs “have access to a stream of a user’s browsing history,” Marshall Erwin, senior director of trust and security at, er, Mozilla, warned in November. “This is particularly concerning in light of the rollback of the broadband privacy rules, which removed guardrails for how ISPs can use your data. The same ISPs are now fighting to prevent the deployment of DNS-over-HTTPS.”

Mozilla today insisted its new best buddy Comcast is going to play nice and follow the DNS privacy program’s rules.

Source: Talk about the fox guarding the hen house. Comcast to handle DNS-over-HTTPS for Firefox-using subscribers • The Register

Tens of thousands of mobile numbers of over-50s sold for WhatsApp fraud

Names, addresses and mobile numbers have been sold for fraud carried out over WhatsApp. Most of these numbers come from call centres, mainly those selling energy contracts. The fresher a lead is, the more it is worth: between 25 cents and 2 euros. The money is usually transferred through money mules, who keep a percentage of the proceeds.

Source: ’06-nummers van tienduizenden vijftigplussers doorverkocht voor WhatsAppfraude’ – Emerce

Microsoft Edge Accused of Sneakily Importing Firefox Data on Windows 10

In the case of Firefox users, some discovered that the new default Windows 10 browser, which is shipped to their devices via Windows Update, sometimes imports the data from Mozilla’s application even if they don’t give their permission.

Some of these Firefox users decided to kill the initial setup process of Microsoft Edge, only to discover that despite the wizard shutting down prematurely, the browser still copied data stored by Mozilla’s browser.

Several users confirmed on reddit that this behavior happened on their computers too.

Silent data importing

“Love rebooting my computer to get treated to a forced tour of a browser I’m not going to use that I have to force close through the task manager to escape, and then finding out it’s been copying over my data from Firefox without permission,” one user explains.

“Unless you close it via task manager instead of doing the forced setup, in which case it copies your data anyway, and the worst part is most people will never know what it’s doing because they’ll never open it again. I only reopened it because I noticed it automatically signed me into the browser as it was closing and wanted to sign out before not touching it again, at which point I discovered it had already copied my Firefox data over despite the fact I didn’t go through the setup process,” someone else explains.

Microsoft has remained tight-lipped on this, so for the time being it’s still not known why Edge imports Firefox data even when the initial wizard is killed off manually by the user.

Users who don’t want to be offered the new Edge on Windows Update can turn to the dedicated toolkit that Microsoft released earlier this year, while removing the browser is possible by just uninstalling the update from the device.

Source: Microsoft Edge Accused of Sneakily Importing Firefox Data on Windows 10

Google isn’t even trying to not be creepy: ‘Continuous Match Mode’ in Assistant will listen to everything until it’s disabled

Google has introduced “continuous match mode” for apps on its voice-powered Assistant platform, where it will listen to everything without pausing. At the same time it has debuted related developer tools, new features, and the ability to display web content on its Smart Display hardware using the AMP component framework.

The Chocolate Factory has big plans for its voice assistant. “We consider voice to be the biggest paradigm shift around us,” said director of product Baris Gultekin, speaking at the Voice Global summit, where the new features were introduced.

The goal is “ambient computing”, where you can interact with the big G anywhere at any time, so pervasively that you do not notice it. Voice interaction is a key part of this since it extends the ability to perform searches or run applications to scenarios where tapping a keyboard or touching a display are not possible.

Google Assistant exists in many guises such as on smartphones and watches, TVs, PCs, and also on dedicated hardware, such as the voice-only Google Home and Google Home Mini, or with “smart display” screens on the Google Nest Hub or devices from Lenovo and Harman. While assistant devices have been popular, Android phones (which nag you to set up the Assistant) must form the largest subset of users. Over all the device types, the company claims over 500 million active users.

[…]

Actions Builder will “replace DialogFlow as the preferred way to develop actions on the assistant,” said Shodjai.

Google’s new Actions Builder at work

Trying out the new Actions Builder, we discovered that running an action under development is impossible if you have disabled the “Web & App Activity” permission, which lets Google keep a record of your actions. A dialog appears prompting you to enable it. It is a reminder of how Google Assistant is entwined with the notion that you give Google your data in return for personalised experiences.

[…]

“Sometimes you want to build experiences that enable the mic to remain open, to enable users to speak more naturally with your action, without waiting for a change in mic states,” said Shodjai at the summit and in the developer post.

“Today we are announcing an early access program for Continuous Match Mode, which allows the assistant to respond immediately to user’s speech enabling more natural and fluid experiences. This is done transparently, so that before the mic opens the assistant will announce, ‘the mic will stay open temporarily’, so users know they can now speak freely without waiting for additional prompts.”

The mode is not yet publicly documented. The demonstrated example was for a game with jolly cartoon pictures; but there may be privacy implications since in effect this setting lets the action continue to listen to everything while the mode is active.

Shodjai did not explain how users will end a Continuous Match Mode session but presumably this will be either after a developer-defined exit intent, or via a system intent as with existing actions. Until that happens, the action will be able to keep running.

Just as with personalisation via tracking and data collection, privacy and pervasive computing do not sit comfortably together, and with the new Continuous Match Mode a little more privacy slips away.

Source: Google isn’t even trying to not be creepy: ‘Continuous Match Mode’ in Assistant will listen to everything until it’s disabled • The Register

Zoom won’t encrypt free calls because it wants to comply with law enforcement

If you’re a free Zoom user, and waiting for the company to roll out end-to-end encryption for better protection of your calls, you’re out of luck. Free calls won’t be encrypted, and law enforcement will be able to access your information in case of ‘misuse’ of the platform.

Zoom CEO Eric Yuan today said that the video conferencing app’s upcoming end-to-end encryption feature will be available only to paid users. After announcing the company’s financial results for Q1 2020, Yuan said the firm wants to keep this feature away from free users so it can work with law enforcement in case of the app’s misuse:

Free users, for sure, we don’t want to give that [end-to-end encryption]. Because we also want to work it together with FBI and local law enforcement, in case some people use Zoom for bad purpose.

In the past, platforms with end-to-end encryption, such as WhatsApp, have faced heavy scrutiny in many countries because they were unable to trace the origins of problematic and misleading messages. Zoom likely wants to avoid being in such a position, and wants to comply with local laws to keep operating across the globe.

Alex Stamos, working as a security consultant with Zoom, said the company wants to catch repeat offenders for hate speech or child-exploitation content by not offering end-to-end encryption to free users.

In March, The Intercept published a report stating that the company doesn’t use end-to-end encryption, despite claiming that on its website and security white paper. Later, Zoom apologized and issued a clarification to specify it didn’t provide the feature at that time.

Last month, the company acquired Keybase.io, an encryption-focused identity service, to build out its end-to-end encryption offering. Yuan said today that the company got a lot of feedback from users on encryption and is working on executing it. However, he didn’t specify a release date for the feature.

According to the Q1 2020 results, the company grew 169% year-on-year in terms of revenue. Zoom has more than 300 million daily participants attending meetings through the platform.

Source: Zoom won’t encrypt free calls because it wants to comply with law enforcement

GSMA suggests mobile carriers bake contact-tracing into their own apps – if governments ask for it

The GSM Association, the body that represents mobile carriers and influences the development of standards, has suggested its members bake virus contact-tracing functionality into their own bundled software.

The body today popped out a paper [PDF] on contact-tracing apps. After some unremarkable observations about the need for and operations of such apps, plus an explanation of the centralised vs. decentralised data storage debate, the paper offers members a section titled: “How the mobile industry can help.”

That section suggests carriers could help to improve the reach of and disseminate such apps with the following three tactics:

  • Integrate software into own apps (e.g. customer self-care app), if this is part of the national strategy
  • Pre-install on devices
  • Communicate to / educate subscribers

The first item may prove unworkable given Google and Apple have indicated they’ll only register coronavirus-related apps if they’re developed by governments and their health agencies. The two tech giants have also said they’ll only allow one app per jurisdiction to use their pro-privacy COVID-19 contact-tracing interface. The second suggestion also has potential pitfalls as contact-tracing apps are generally opt-in affairs. Carriers would need to be sensitive about how they are installed and the user experience offered if the apps ask for registration.
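
For context, the “pro-privacy” interface mentioned above is built around rotating, random-looking Bluetooth identifiers rather than central location logs. A heavily simplified sketch of the idea (this is not the actual Apple/Google Exposure Notification key schedule, just an illustration of the shape of it):

```python
# Simplified sketch of decentralised contact tracing: devices
# broadcast short-lived IDs derived from a daily key, so contacts
# can be matched later without a central location database.
# NOT the actual Apple/Google Exposure Notification spec.
import hashlib
import hmac

def daily_key(master_key: bytes, day: int) -> bytes:
    """One key per day, derived from a secret that stays on the phone."""
    return hmac.new(master_key, f"day-{day}".encode(), hashlib.sha256).digest()

def rolling_id(day_key: bytes, interval: int) -> bytes:
    """ID broadcast over Bluetooth for one short interval (e.g. 10 min)."""
    return hmac.new(day_key, f"interval-{interval}".encode(),
                    hashlib.sha256).digest()[:16]

master = b"\x01" * 32            # never leaves the device
dk = daily_key(master, day=18450)
beacon = rolling_id(dk, interval=37)

# A diagnosed user uploads only daily keys; other phones re-derive
# the rolling IDs locally and compare them with IDs they overheard.
assert rolling_id(daily_key(master, 18450), 37) == beacon
assert rolling_id(daily_key(master, 18450), 38) != beacon
```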

Source: GSMA suggests mobile carriers bake contact-tracing into their own apps – if governments ask for it • The Register

Qatar’s contact tracing app put over one million people’s info at risk

Contact tracing apps have the potential to slow the spread of COVID-19. But without proper security safeguards, some fear they could put users’ data and sensitive info at risk. Until now, that threat has been theoretical. Today, Amnesty International reports that a flaw in Qatar’s contact tracing app put the personal information of more than one million people at risk.

The flaw, now fixed, made info like names, national IDs, health status and location data vulnerable to cyberattacks. Amnesty’s Security Lab discovered the flaw on May 21st and says authorities fixed it on May 22nd. The vulnerability had to do with QR codes that included sensitive info. The update stripped some of that data from the QR codes and added a new layer of authentication to prevent foul play.
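
The fix described, stripping sensitive fields and authenticating what remains, can be sketched as a server-signed QR payload. The fields, key, and format below are illustrative, not EHTERAZ’s actual scheme:

```python
# Sketch of the kind of fix Amnesty describes: instead of exposing
# raw personal data via QR codes, the server signs a reduced payload
# so tampered or forged codes are rejected. Fields, key, and format
# are illustrative, not the app's actual scheme.
import hashlib
import hmac
import json

SERVER_KEY = b"server-side secret, never shipped in the app"

def make_qr_payload(national_id: str, status: str) -> str:
    data = json.dumps({"id": national_id, "status": status})
    tag = hmac.new(SERVER_KEY, data.encode(), hashlib.sha256).hexdigest()
    return data + "." + tag

def verify_qr_payload(payload: str) -> bool:
    data, _, tag = payload.rpartition(".")
    expected = hmac.new(SERVER_KEY, data.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the tag byte by byte.
    return hmac.compare_digest(tag, expected)

qr = make_qr_payload("12345678901", "green")
assert verify_qr_payload(qr)
assert not verify_qr_payload(qr.replace("green", "red"))  # tampering caught
```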

Qatar’s app, called EHTERAZ, uses GPS and Bluetooth to track COVID-19 cases, and last week authorities made it mandatory. According to Amnesty, people who don’t use the app can face up to three years in prison and a fine of QAR 200,000 (about $55,000).

“This incident should act as a warning to governments around the world rushing out contact tracing apps that are too often poorly designed and lack privacy safeguards. If technology is to play an effective role in tackling the virus, people need to have confidence that contact tracing apps will protect their privacy and other human rights,” said Claudio Guarnieri, head of Amnesty International’s Security Lab.

Source: Qatar’s contact tracing app put over one million people’s info at risk | Engadget

Hey Siri, are you still recording people’s conversations despite promising not to do so nine months ago?

Apple may still be recording and transcribing conversations captured by Siri on its phones, despite promising to put an end to the practice nine months ago, claims a former Apple contractor who was hired to listen into customer conversations.

In a letter [PDF] sent to data protection authorities in Europe, Thomas Le Bonniec expresses his frustration that, despite his exposing in April 2019 that Apple had hired hundreds of people to analyze recordings its users were unaware had been made, nothing appears to have changed.

Those recordings were captured by Apple’s Siri digital assistant, which constantly listens out for potential voice commands to obey. The audio was passed to human workers to transcribe, label, and analyze to improve Siri’s neural networks that process what people say. Any time Siri heard something it couldn’t understand – be it a command or someone’s private conversation or an intimate moment – it would send a copy of the audio to the mothership for processing so that it could be retrained to do better next time.

Le Bonniec worked for Apple subcontractor Globe Technical Services in Ireland for two months, performing this manual analysis of audio recorded by Siri, and witnessed what he says was a “massive violation of the privacy of millions of citizens.”

“All over the world, people had their private life recorded by Apple up to the most intimate and sensitive details,” he explained. “Enormous amounts of personal data were collected, stored and analyzed by Apple in an opaque way. These practices are clearly at odds with the company’s privacy-driven policies and should be urgently investigated by Data Protection Authorities and Privacy watchdogs.”

But despite Apple acknowledging that it was transcribing and tagging huge numbers of conversations users were unaware had been recorded by their Macs and iOS devices, promising a “thorough review of our practices and policies,” and apologizing that it hadn’t “been fully living up to our high ideals,” Le Bonniec says nothing has changed.

“Nothing has been done to verify if Apple actually stopped the programme. Some sources already confirmed to me that Apple has not,” he said.

“I believe that Apple’s statements merely aim to reassure their users and public authorities, and they do not care for their user’s consent, unless being forced to obtain it by law,” says the letter. “It is worrying that Apple (and undoubtedly not just Apple) keeps ignoring and violating fundamental rights and continues their massive collection of data.”

In effect, he argues, “big tech companies are basically wiretapping entire populations despite European citizens being told the EU has one of the strongest data protection laws in the world. Passing a law is not good enough: it needs to be enforced upon privacy offenders.”

Not good

How bad is the situation? According to Le Bonniec: “I listened to hundreds of recordings every day, from various Apple devices (e.g. iPhones, Apple Watches, or iPads). These recordings were often taken outside of any activation of Siri, i.e. without any actual intention from the user to activate it for a request.

“These processings were made without users being aware of it, and were gathered into datasets to correct the transcription of the recording made by the device. The recordings were not limited to the users of Apple devices, but also involved relatives, children, friends, colleagues, and whoever could be recorded by the device.

“The system recorded everything: names, addresses, messages, searches, arguments, background noises, films, and conversations. I heard people talking about their cancer, referring to dead relatives, religion, sexuality, pornography, politics, school, relationships, or drugs with no intention to activate Siri whatsoever.”

So, pretty bad.

Source: Hey Siri, are you still recording people’s conversations despite promising not to do so nine months ago? • The Register

Senate Votes to Allow FBI to Look at US Citizens’ Web Browsing History Without a Warrant

The US Senate has voted to give law enforcement agencies access to web browsing data without a warrant, dramatically expanding the government’s surveillance powers in the midst of the COVID-19 pandemic.

The power grab was led by Senate majority leader Mitch McConnell as part of a reauthorization of the Patriot Act, which gives federal agencies broad domestic surveillance powers. Sens. Ron Wyden (D-OR) and Steve Daines (R-MT) attempted to remove the expanded powers from the bill with a bipartisan amendment.

But in a shock upset, the privacy-preserving amendment fell short by a single vote after several senators who would have voted “Yes” failed to show up to the session, including Bernie Sanders. Nine Democratic senators also voted “No,” causing the amendment to fall short of the 60-vote threshold it needed to pass.

“The Patriot Act should be repealed in its entirety, set on fire and buried in the ground,” Evan Greer, the deputy director of Fight For The Future, told Motherboard. “It’s one of the worst laws passed in the last century, and there is zero evidence that the mass surveillance programs it enables have ever saved a single human life.”

Source: Senate Votes to Allow FBI to Look at Your Web Browsing History Without a Warrant – VICE

Privacy Enhancements for Android

Privacy Enhancements for Android (PE for Android) is a platform for exploring concepts in regulating access to private information on mobile devices. The goal is to create an extensible privacy system that abstracts away the details of various privacy-preserving technologies. PE for Android allows app developers to safely leverage state-of-the-art privacy techniques without knowledge of esoteric underlying technologies. Further, PE for Android helps users to take ownership of their private information by presenting them with more intuitive controls and permission enforcement. The platform was developed as a fork of the Android Open Source Project (AOSP) release for Android 9 “Pie” and can be installed as a Generic System Image (GSI) on a Project Treble-compliant device.

Source: Privacy Enhancements for Android

Under DARPA’s Brandeis program, a team of researchers led by Two Six Labs and Raytheon BBN Technologies have developed a platform called Privacy Enhancements for Android (PE for Android) to explore more expressive concepts in regulating access to private information on mobile devices. PE for Android seeks to create an extensible privacy system that abstracts away the details of various privacy-preserving technologies, allowing application developers to utilize state-of-the-art privacy techniques, such as secure multi-party computation and differential privacy, without knowledge of their underlying esoteric technologies. Importantly, PE for Android allows mobile device users to take ownership of their private information by presenting them with more intuitive controls and permission enforcement options.

Source: Researchers on DARPA’s Brandeis Program Enhance Privacy Protections for Android Applications
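Differential privacy, one of the techniques PE for Android is said to abstract away, can be illustrated with a minimal sketch. This is not PE for Android code; it is a generic Laplace-mechanism example in Python showing how a query result (here, a count) can be released with calibrated noise rather than exactly:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, epsilon: float = 1.0) -> float:
    """Differentially private count: a count query has sensitivity 1,
    so Laplace noise with scale 1/epsilon gives epsilon-DP."""
    return len(list(records)) + laplace_noise(1.0 / epsilon)
```

Lower `epsilon` means more noise and stronger privacy; an app developer using such an abstraction would only ever see the noised result, never the raw records.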

No cookie consent walls — and no, scrolling isn’t consent, says EU data protection body

You can’t make access to your website’s content dependent on a visitor agreeing that you can process their data — aka a ‘consent cookie wall’. Not if you need to be compliant with European data protection law.

That’s the unambiguous message from the European Data Protection Board (EDPB), which has published updated guidelines on the rules around online consent to process people’s data.

Under pan-EU law, consent is one of six lawful bases that data controllers can use when processing people’s personal data.

But in order for consent to be legally valid under Europe’s General Data Protection Regulation (GDPR) there are specific standards to meet: It must be clear and informed, specific and freely given.

Hence cookie walls that demand ‘consent’ as the price for getting inside the club are not only an oxymoron but run into a legal brick wall.

No consent behind a cookie wall

The regional cookie wall has been crumbling for some time, as we reported last year — when the Dutch DPA clarified its guidance to ban cookie walls.

The updated guidelines from the EDPB look intended to hammer the point home. The steering body’s role is to provide guidance to national data protection agencies to encourage a more consistent application of data protection rules.

The EDPB’s intervention should — should! — remove any inconsistencies of interpretation on the updated points by the national agencies of the bloc’s 27 Member States. (Though compliance with EU data protection law tends to be a marathon rather than a sprint; on the cookie wall issue the ‘runners’ have been circling the track for a considerable time now.)

As we noted in our report on the Dutch clarification last year, the Internet Advertising Bureau Europe was operating a full cookie wall — instructing visitors to ‘agree’ to its data processing terms if they wished to view the content.

The problem, as we pointed out, is that this wasn’t a free choice. Yet EU law requires a free choice for consent to be legally valid. So it’s interesting to note that IAB Europe has, at some point since, updated its cookie consent implementation — removing the cookie wall and offering a fairly clear (if nudged) choice to visitors to either accept or deny cookies for “aggregated statistics”…

As we said at the time the writing was on the wall for consent cookie walls.

The EDPB document includes the below example to illustrate the salient point that consent cookie walls do not “constitute valid consent, as the provision of the service relies on the data subject clicking the ‘Accept cookies’ button. It is not presented with a genuine choice.”

It’s hard to get clearer than that, really.

Scrolling never means ‘take my data’

A second area to get attention in the updated guidance, as a result of the EDPB deciding there was a need for additional clarification, is the issue of scrolling and consent.

Simply put: Scrolling on a website or digital service can not — in any way — be interpreted as consent.

Or, as the EDPB puts it, “actions such as scrolling or swiping through a webpage or similar user activity will not under any circumstances satisfy the requirement of a clear and affirmative action” [emphasis ours].
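The EDPB’s tests translate naturally into logic a site operator could enforce. The sketch below is an illustrative Python model (the event names and fields are invented, not from the guidelines): consent counts only if it comes from a clear affirmative action and was informed and freely given; scrolling and swiping never qualify:

```python
from dataclasses import dataclass

# Actions a site might treat as "clear affirmative" (names invented here):
AFFIRMATIVE_ACTIONS = {"clicked_accept", "toggled_consent_on"}

@dataclass
class ConsentEvent:
    action: str         # what the visitor actually did
    informed: bool      # were they told what the data is used for?
    freely_given: bool  # could they refuse and still use the service?

def is_valid_consent(event: ConsentEvent) -> bool:
    """Consent is valid only via a clear affirmative action that is
    informed and freely given; scrolling or swiping never qualifies."""
    if event.action not in AFFIRMATIVE_ACTIONS:
        return False
    return event.informed and event.freely_given
```

Note that a cookie wall fails the `freely_given` test even when the visitor does click “accept”, which is exactly the EDPB’s point.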

Source: No cookie consent walls — and no, scrolling isn’t consent, says EU data protection body | TechCrunch

IAB Europe Guide to the Post Third-Party Cookie Era

This Guide has been developed by experts from IAB Europe’s Programmatic Trading Committee (PTC) to prepare brands, agencies, publishers and tech intermediaries for the much-anticipated post third-party cookie advertising ecosystem.

It provides background to the current use of cookies in digital advertising today and an overview of the alternative solutions being developed. As solutions evolve, the PTC will be updating this Guide on a regular basis to provide the latest information and guidance on market alternatives to third-party cookies.

The Guide, available below as an e-book or PDF, helps to answer the following questions:

  • What factors have contributed to the depletion of the third-party cookie?
  • How will the depletion of third-party cookies impact stakeholders and the wider industry including proprietary platforms?
  • How will the absence of third-party cookies affect the execution of digital advertising campaigns?
  • What solutions currently exist to replace the usage of third-party cookies?
  • What industry solutions are currently being developed and by whom?
  • How can I get involved in contributing to the different solutions?

Source: IAB Europe Guide to the Post Third-Party Cookie Era – IAB Europe

Yup, soon advertisers won’t be able to track you across the internet using third-party cookies

Researchers create a new system to protect users’ online data by checking if data entered is consistent with the privacy policy

Researchers have created a new system that helps Internet users ensure their online data is secure.

The software-based system, called Mitigator, includes a plugin users can install in their browser that will give them a secure signal when they visit a website verified to process its data in compliance with the site’s privacy policy.

“Privacy policies are really hard to read and understand,” said Miti Mazmudar, a PhD candidate in Waterloo’s David R. Cheriton School of Computer Science. “What we try to do is have a compliance system that takes a simplified model of the privacy policy and checks the code on the website’s end to see if it does what the privacy policy claims to do.

“If a website requires you to enter your email address, Mitigator will notify you if the privacy policy stated that this wouldn’t be needed or if the privacy policy did not mention the requirement at all.”
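The paper describes a more sophisticated system, but the email-address check in that example can be sketched as a toy Python function (the simplified policy model and field names here are assumptions): compare the fields a form requests against the data types the policy declares, and flag anything undeclared:

```python
def undeclared_fields(form_fields, policy_model):
    """Return form fields the simplified privacy-policy model never declares.

    policy_model maps declared data types to their stated purpose."""
    return [field for field in form_fields if field not in policy_model]

# Hypothetical simplified model extracted from a site's privacy policy:
policy = {"name": "account creation", "email": "order confirmations"}

# The site's sign-up form asks for a phone number the policy never mentions:
flagged = undeclared_fields(["name", "email", "phone"], policy)  # ["phone"]
```

A plugin like Mitigator’s would surface `flagged` to the user; the real system additionally verifies, via trusted hardware, that the server-side code honors the policy.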

Mitigator can work on any computer, but the companies that own the website servers must have machines with a trusted execution environment (TEE). A TEE, a secure area of modern server-class processors, protects the confidentiality and integrity of the code and data loaded into it.

“The big difference between Mitigator and prior systems that had similar goals is that Mitigator’s primary focus is on the signal it gives to the user,” said Ian Goldberg, a professor in Waterloo’s Faculty of Mathematics. “The important thing is not just that the company knows their software is running correctly; we want the user to get this assurance that the company’s software is running correctly and is processing their data properly and not just leaving it lying around on disk to be stolen.

“Users of Mitigator will know whether their data is being properly protected, managed, and processed while the companies will benefit in that their customers are happier and more confident that nothing untoward is being done with their data.”

The study, Mitigator: Privacy policy compliance using trusted hardware, authored by Mazmudar and Goldberg, has been accepted for publication in the Proceedings of Privacy Enhancing Technologies.

Source: Researchers create a new system to protect users’ online data | Waterloo Stories | University of Waterloo

UK COVID-19 contact tracing app data may be kept for ‘research’ after crisis ends, MPs told

Britons will not be able to ask NHS admins to delete their COVID-19 tracking data from government servers, Matthew Gould, chief exec of the health service’s digital arm NHSX, admitted to MPs this afternoon.

Gould also told Parliament’s Human Rights Committee that data harvested from Britons through NHSX’s COVID-19 contact tracing app would be “pseudonymised” – and appeared to leave the door open for that data to be sold on for “research”.

The government’s contact-tracing app will be rolled out in Britain this week. A demo seen by The Register showed its basic consumer-facing functions. Key to those is a big green button that the user presses to send 28 days’ worth of contact data to the NHS.

Screenshot of the NHSX COVID-19 contact tracing app

Written by tech arm NHSX, Britain’s contact-tracing app breaks with international convention by opting for a centralised model of data collection, rather than a decentralised one that keeps contact data on users’ phones.

In response to questions from Scottish National Party MP Joanna Cherry this afternoon, Gould told MPs: “The data can be deleted for as long as it’s on your own device. Once uploaded, all the data will be deleted or fully anonymised in line with the law, so it can be used for research purposes.”

Source: UK COVID-19 contact tracing app data may be kept for ‘research’ after crisis ends, MPs told • The Register

New Firefox service will generate unique email aliases to enter in online forms

Browser maker Mozilla is working on a new service called Private Relay that generates unique aliases to hide a user’s email address from advertisers and spam operators when filling in online forms.

The service entered testing last month and is currently in a closed beta, with a public beta currently scheduled for later this year, ZDNet has learned.

Private Relay will be available as a Firefox add-on that lets users generate a unique email address — an email alias — with one click.

The user can then enter this email address in web forms to send contact requests, subscribe to newsletters, and register new accounts.

“We will forward emails from the alias to your real inbox,” Mozilla says on the Firefox Private Relay website.

“If any alias starts to receive emails you don’t want, you can disable it or delete it completely,” the browser maker said.

The concept of email aliases has existed for decades, but managing them has always been a chore, or email providers simply didn’t offer the feature.

Through Firefox Private Relay, Mozilla hopes to provide an easy to use solution that can let users create and destroy email aliases with a few button clicks.
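Mozilla hasn’t documented Private Relay’s internals, but the alias lifecycle described above (create with one click, forward to the real inbox, disable or delete at will) can be modelled in a few lines. Everything in this Python sketch, from the class name to the relay domain, is an assumption for illustration:

```python
import secrets

class AliasRelay:
    """Toy model of an email-alias relay; names and behavior are assumptions."""

    def __init__(self, real_address: str, domain: str = "relay.example"):
        self.real_address = real_address
        self.domain = domain
        self.aliases = {}  # alias address -> enabled?

    def create_alias(self) -> str:
        """One-click alias creation: a random, unguessable local part."""
        alias = f"{secrets.token_hex(6)}@{self.domain}"
        self.aliases[alias] = True
        return alias

    def disable(self, alias: str) -> None:
        self.aliases[alias] = False  # keep the alias, but drop its mail

    def delete(self, alias: str) -> None:
        self.aliases.pop(alias, None)

    def route(self, alias: str):
        """Where the relay forwards mail sent to `alias` (None = dropped)."""
        return self.real_address if self.aliases.get(alias) else None
```

The key property is that each website only ever sees a disposable alias, so a leaked or spammed address can be cut off without touching the real inbox.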

Source: New Firefox service will generate unique email aliases to enter in online forms | ZDNet

Brave accuses European governments of GDPR resourcing failure

Brave, maker of a pro-privacy browser, has lodged complaints with the European Commission against 27 EU Member States for under-resourcing their national data protection watchdogs.

It’s asking the European Union’s executive body to launch an infringement procedure against Member State governments, and even refer them to the bloc’s top court, the European Court of Justice, if necessary.

“Article 52(4) of the GDPR [General Data Protection Regulation] requires that national governments give DPAs the human and financial resources necessary to perform their tasks,” it notes in a press release.

Brave has compiled a report to back up the complaints — in which it chronicles a drastic shortage of tech expertise and budget resource among Europe’s privacy agencies to enforce the region’s data protection framework.

Lack of the resources needed to ensure the regulation’s teeth can clamp down on bad behavior, as the law’s drafters intended, has been a long-standing concern.

In the February annual report of the Irish data watchdog — the agency that regulates most of big tech in Europe — the lack of any decisions in major cross-border cases against a roll-call of tech giants loomed large, despite plenty of worthy filler and reams of stats illustrating the massive caseload of complaints the agency is now dealing with.

Ireland’s decelerating budget and headcount in the face of rising numbers of GDPR complaints is a key concern highlighted by Brave’s report.

Per the report, half of EU data protection agencies have what it dubs a small budget (sub €5M), while only five of Europe’s 28 national GDPR enforcers have more than 10 “tech specialists”, as it describes them.

“Almost a third of the EU’s tech specialists work for one of Germany’s Länder (regional) or federal DPAs,” it warns. “All other EU countries are far behind Germany.”

“Europe’s GDPR enforcers do not have the capacity to investigate Big Tech,” is its top-line conclusion.

“If the GDPR is at risk of failing, the fault lies with national governments, not with the data protection authorities,” said Dr Johnny Ryan, Brave’s chief policy & industry relations officer, in a statement. “Robust, adversarial enforcement is essential. GDPR enforcers must be able to properly investigate ‘big tech’, and act without fear of vexatious appeals. But the national governments of European countries have not given them the resources to do so. The European Commission must intervene.”

It’s worth noting that Brave is not without its own commercial interest here. It absolutely has skin in the game, as a provider of privacy-sensitive adtech.

[…]

Source: Brave accuses European governments of GDPR resourcing failure | TechCrunch