A handful of Chrome users have sued Google, accusing the browser maker of collecting personal information despite their decision not to sync data stored in Chrome with a Google Account.
The lawsuit [PDF], filed on Monday in a US federal district court in San Jose, California, claimed Google promises not to collect personal information from Chrome users who choose not to sync their browser data with a Google Account but does so anyway.
“Google intentionally and unlawfully causes Chrome to record and send users’ personal information to Google regardless of whether a user elects to Sync or even has a Google account,” the complaint stated.
Filed on behalf of “unsynced” plaintiffs Patrick Calhoun, Elaine Crespo, Hadiyah Jackson and Claudia Kindler – all said to have stopped using Chrome and to wish to return to it, rather than use a different browser, once Google stops tracking unsynced users – the lawsuit cited the Chrome Privacy Notice.
Since 2016, that notice has promised, “You don’t need to provide any personal information to use Chrome.” And since 2019, it has said, “the personal information that Chrome stores won’t be sent to Google unless you choose to store that data in your Google Account by turning on sync,” with earlier versions offering variants on that wording.
Nonetheless, whether or not account synchronization has been enabled, it’s claimed, Google uses Chrome to collect IP addresses linked to user-agent data, identifying cookies, unique browser identifiers called X-Client-Data headers, and browsing history, allegedly in violation of federal wiretap laws and state statutes.
Google then links that information with individuals and their devices, it’s claimed, through practices like cookie syncing, where cookies set in a third-party context get associated with cookies set in a first-party context.
“Cookie synching allows cooperating websites to learn each other’s cookie identification numbers for the same user,” the complaint says. “Once the cookie synching operation is complete, the two websites exchange information that they have collected and hold about a user, further making these cookies ‘Personal Information.’”
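To make that mechanism concrete, here is a minimal sketch of what a cookie-sync handshake can look like at the HTTP level. The domains, parameter names, and IDs are hypothetical illustrations, not details taken from the complaint:

```typescript
// Hypothetical sketch of a cookie-sync handshake between two ad domains.
// Neither domain can read the other's cookie directly, so they exchange
// their IDs for the same user via a redirect URL.

type CookieJar = Map<string, string>;          // domain -> that domain's user ID

// Partner A handles a tracking-pixel request and redirects to partner B,
// leaking its own ID for this user in the query string.
function partnerARedirect(cookies: CookieJar): string {
  const aId = cookies.get("tracker-a.example") ?? "unknown";
  return `https://tracker-b.example/sync?partner=a&partner_uid=${encodeURIComponent(aId)}`;
}

// Partner B receives the redirect, reads its own cookie, and records the
// mapping "B's ID for this user is A's ID" in its match table.
const matchTable = new Map<string, string>();  // B's ID -> A's ID

function partnerBSync(cookies: CookieJar, redirectUrl: string): void {
  const bId = cookies.get("tracker-b.example") ?? "unknown";
  const aId = new URL(redirectUrl).searchParams.get("partner_uid") ?? "unknown";
  matchTable.set(bId, aId);
}

// One user, two "anonymous" cookies -- now joinable into one profile.
const cookies: CookieJar = new Map([
  ["tracker-a.example", "a-123"],
  ["tracker-b.example", "b-456"],
]);
partnerBSync(cookies, partnerARedirect(cookies));
console.log(matchTable); // Map(1) { 'b-456' => 'a-123' }
```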
The litigants pointed to Google’s plan to phase out third-party cookies, and noted that Google doesn’t need cookies anyway, given the ability of its X-Client-Data header to uniquely identify people.
Twitter contractors with high-level administrative access to accounts regularly abused their privileges to spy on celebrities including Beyoncé, even approximating their movements via internet protocol addresses, according to a report by Bloomberg.
Over 1,500 workers and contractors at Twitter who handle internal support requests and manage user accounts have high-level privileges that enable them to override user security settings and reset their accounts via Twitter’s backend, as well as view certain details of accounts like IP addresses, phone numbers, and email addresses.
[…]
Two of the former Twitter employees told Bloomberg that projects such as enhancing security of “the system that houses Twitter’s backup files or enhancing oversight of the system used to monitor contractor activity were, at times, shelved for engineering products designed to enhance revenue.” In the meantime, some of those with access (some of whom were contractors with Cognizant at up to six separate work sites) abused it to view details including IP addresses of users. Executives didn’t prioritize policing the internal support team, two of the former employees told Bloomberg, and at times Twitter security allegedly had trouble tracking misconduct due to sheer volume.
A system was in place to create access logs, but it could be fooled by simply creating bullshit support tickets that made the spying appear legitimate; two of the former employees told Bloomberg that from 2017 to 2018 members of the internal support team “made a kind of game out of” the workaround. The security risks inherent to granting access to so many people were reportedly brought up to the company’s board repeatedly from 2015-2019, but little changed.
This had consequences beyond the most recent hack. Last year, the Department of Justice announced charges against two former employees (a U.S. national and a Saudi citizen) whom it accused of espionage on behalf of an individual close to Saudi Crown Prince Mohammed bin Salman. The DOJ alleged that the intent of the operation was to gain access to private information on political dissidents.
The EU has demanded that Google make major concessions relating to its $2.1 billion acquisition of fitness-tracking company Fitbit if the deal is to be allowed to proceed imminently, according to people with direct knowledge of the discussions.
Since it was announced last November, the acquisition has faced steep opposition from consumer groups and regulators, who have raised concerns over the effect of Google’s access to Fitbit’s health data on competition.
EU regulators now want the company to pledge that it will not use that information to “further enhance its search advantage” and that it will grant third parties equal access to it, these people said.
The move comes days after the EU regulators suffered a major blow in Luxembourg, losing a landmark case that would have forced Apple to pay back €14.3 billion in taxes to Ireland.
Brussels insiders said that a refusal by Google to comply with the new demands would probably result in a protracted investigation, adding that such a scenario could ultimately leave the EU at a disadvantage.
“It is like a poker game,” said a person following the case closely. “In a lengthy probe, the commission risks having fewer or no pledges and still having to clear the deal.”
They added that the discussions over the acquisition were “intense,” and there was no guarantee that any agreement between Brussels and Google would be reached.
Google had previously promised it would not use Fitbit’s health data to improve its own advertising, but according to Brussels insiders, the commitment was not sufficient to assuage the EU’s concerns nor those of US regulators also examining the deal.
Apple’s iOS 14 beta has proven surprisingly handy at sussing out what apps are snooping on your phone’s data. It ratted out LinkedIn, Reddit, and TikTok for secretly copying clipboard content earlier this month, and now Instagram’s in hot water after several users reported that their camera’s “in use” indicator stays on even when they’re just scrolling through their Instagram feed.
According to reports shared on social media by users with the iOS 14 beta installed, the green “camera on” indicator would pop up when they used the app even when they weren’t taking photos or recording videos. If this sounds like déjà vu, that’s because Instagram’s parent company, Facebook, had to fix a similar issue with its iOS app last year, when users found their device’s camera would quietly activate in the background without their permission while they were using Facebook.
In an interview with The Verge, an Instagram spokesperson called the issue a bug that the company is currently working to patch.
[…]
Even though iOS 14 is still in beta and its privacy features aren’t yet available to the general public, it’s already raised plenty of red flags about apps snooping on your data. Though TikTok, LinkedIn, and Reddit may have been the most high-profile examples, researchers Talal Haj Bakry and Tommy Mysk found more than 50 iOS apps quietly accessing users’ clipboards as well. And while there are certainly more malicious breaches of privacy, these kinds of discoveries are a worrying reminder of how much we risk every time we go online.
Facebook has agreed to pay a total of $650 million in a landmark class action lawsuit over the company’s unauthorized use of facial recognition, a new court filing shows.
The filing represents a revised settlement that increases the total payout by $100 million and comes after a federal judge balked at the original proposal on the grounds it did not adequately punish Facebook.
The settlement covers any Facebook user in Illinois whose picture appeared on the site after 2011. According to the new document, those users can each expect to receive between $200 and $400 depending on how many people file a claim.
The case represents one of the biggest payouts for privacy violations to date, and contrasts sharply with other settlements such as that for the notorious data breach at Equifax—for which victims are expected to receive almost nothing.
The Facebook lawsuit came about as a result of a unique state law in Illinois, which obliges companies to get permission before using facial recognition technology on their customers.
The law has ensnared not just Facebook, but also the likes of Google and photo service Shutterfly. The companies had insisted in court that the law did not apply to their activities, and lobbied the Illinois legislature to exempt them, but these efforts fell short.
The final Facebook settlement is likely to be approved later this year, meaning Illinois residents will be poised to collect a payout in 2021.
The judge overseeing the settlement rejected the initial proposal in June on the grounds that the Illinois law provides penalties of up to $5,000 per violation, meaning that across millions of class members Facebook could in principle have been obliged to pay around $47 billion—an amount far exceeding what the company agreed to pay under the settlement.
“We are focused on settling as it is in the best interest of our community and our shareholders to move past this matter,” said a Facebook spokesperson.
Edelson PC, the law firm representing the plaintiffs, declined to comment on the revised deal.
Amazon claims it reviews the software created by third-party developers for its Alexa voice assistant platform, yet US academics were able to create more than 200 policy-violating Alexa Skills and get them certified.
In a paper [PDF] presented at the US Federal Trade Commission’s PrivacyCon 2020 event this week, Clemson University researchers Long Cheng, Christin Wilson, Song Liao, Jeffrey Alan Young, Daniel Dong, and Hongxin Hu describe the ineffectiveness of Amazon’s Skills approval process.
The researchers have also set up a website to present their findings.
Like Android and iOS apps, Alexa Skills have to be submitted for review before they’re available to be used with Amazon’s Alexa service. Also like Android and iOS, Amazon’s review process sometimes misses rule-breaking code.
In the researchers’ test, “sometimes” was every time: the e-commerce giant’s review system granted approval for every one of 234 rule-flouting Skills submitted over a 12-month period.
“Surprisingly, the certification process is not implemented in a proper and effective manner, as opposed to what is claimed that ‘policy-violating skills will be rejected or suspended,'” the paper says. “Second, vulnerable skills exist in Amazon’s skills store, and thus users (children, in particular) are at risk when using [voice assistant] services.”
Amazon disputes some of the findings and suggests that the way the research was done skewed the results by removing rule-breaking Skills after certification, but before other systems like post-certification audits might have caught the offending voice assistant code.
The devil is in the details
Alexa hardware has been hijacked by security researchers for eavesdropping and the software on these devices poses similar security risks, but the research paper concerns itself specifically with content in Alexa Skills that violates Amazon’s rules.
Alexa content prohibitions include limitations on activities like collecting information from children, collecting health information, sexually explicit content, descriptions of graphic violence, self-harm instructions, references to Nazis or hate symbols, hate speech, the promotion of drugs, terrorism, or other illegal activities, and so on.
Getting around these rules involved tactics like adding a counter to Skill code, so the app only starts spewing hate speech after several sessions. The paper cites a range of problems with the way Amazon reviews Skills, including inconsistencies where rejected content gets accepted after resubmission, vetting tools that can’t recognize cloned code submitted by multiple developer accounts, excessive trust in developers, and negligence in spotting data harvesting even when the violations are made obvious.
Amazon also does not require developers to re-certify their Skills if the backend code – run on developers’ servers – changes. It’s thus possible for Skills to turn malicious if the developer alters the backend code or an attacker compromises a well-intentioned developer’s server.
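To illustrate, the counter tactic amounts to something like the following hypothetical Skill-backend logic (a sketch invented for this article, not code from the paper):

```typescript
// Hypothetical sketch of the "counter" evasion tactic described in the paper.
// The Skill behaves innocuously during certification and only switches to
// policy-violating output after enough sessions have elapsed -- state the
// developer keeps server-side, invisible to a one-time review.

const sessionCounts = new Map<string, number>(); // userId -> sessions seen

function handleLaunch(userId: string): string {
  const count = (sessionCounts.get(userId) ?? 0) + 1;
  sessionCounts.set(userId, count);

  if (count <= 5) {
    return "Welcome! Here is your harmless fact of the day."; // what a reviewer sees
  }
  return "...policy-violating content served only after several sessions...";
}

console.log(handleLaunch("user-1")); // innocuous on the first few runs
```

Because this state lives on the developer’s server, the switch can also be flipped, or the whole handler rewritten, after certification without any resubmission.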
As part of the project, the researchers also examined 825 published Skills for kids that either had a privacy policy or a negative review. Among these, 52 had policy violations. Negative comments by users mention unexpected advertisements, inappropriate language, and efforts to collect personal information.
Mozilla says it’s working on fixing a bug in Firefox for Android that keeps the smartphone camera active even after users have moved the browser to the background or locked the phone’s screen.
A Mozilla spokesperson told ZDNet in an email this week that a fix is expected in October of this year.
The bug was first spotted and reported to Mozilla a year ago, in July 2019, by an employee of video delivery platform Appear TV.
The bug manifests when users choose to stream video from a website loaded in Firefox rather than from a native app.
Mobile users often choose to stream from a mobile browser for privacy reasons, such as not wanting to install an intrusive app and grant it unfettered access to their smartphone’s data. Browsers sandbox the websites they load, restricting what device data a site can reach and keeping its collection to a minimum.
The Appear TV developer noticed that Firefox video streams kept going, even in situations when they should have normally stopped.
While this raises issues with streams continuing to consume the user’s bandwidth, the bug was also deemed a major privacy issue as Firefox would continue to stream from the user’s device in situations where the user expected privacy by switching to another app or locking the device.
“From our analysis, a website is allowed to retain access to your camera or microphone whilst you’re using other apps, or even if the phone is locked,” a spokesperson for Traced, a privacy app, told ZDNet, after alerting us to the issue.
“While there are times you might want the microphone or video to keep working in the background, your camera should never record you when your phone is locked,” Traced added.
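For context, a web page obtains the camera through the getUserMedia API, and capture continues until the page, or the browser on its behalf, stops the media tracks. A sketch of the page-side hygiene (hypothetical page code, not Mozilla's fix):

```typescript
// Sketch: stop camera/microphone tracks when the page is no longer visible.
// The Firefox bug concerned the browser keeping capture alive in exactly
// this situation; a cautious page can also defend itself.

let stream: MediaStream | null = null;

async function startCapture(): Promise<void> {
  stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  (document.querySelector("video") as HTMLVideoElement).srcObject = stream;
}

document.addEventListener("visibilitychange", () => {
  // Fires when the user switches apps or tabs; locking the screen also
  // hides the page.
  if (document.hidden && stream) {
    stream.getTracks().forEach((track) => track.stop()); // releases camera + mic
    stream = null;
  }
});
```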
Starting today, there’s a VPN on the market from a company you trust. The Mozilla VPN (Virtual Private Network) is now available on Windows and Android devices. This fast and easy-to-use VPN service is brought to you by Mozilla, the makers of Firefox, and a trusted name in online consumer security and privacy services.
See for yourself how the Mozilla VPN works:
The first thing you may notice when you install the Mozilla VPN is how fast your browsing experience is. That’s because the Mozilla VPN is built on modern, lean technology: the WireGuard protocol, whose roughly 4,000 lines of code are a fraction of the size of the legacy protocols used by other VPN service providers.
You will also see a simple, easy-to-use interface, whether you are new to VPNs or just want to set it up and get onto the web.
With no long-term contracts required, the Mozilla VPN is available for just $4.99 USD per month and will initially be available in the United States, Canada, the United Kingdom, Singapore, Malaysia, and New Zealand, with plans to expand to other countries this Fall.
The European Union’s top court ruled Thursday that an agreement that allows big tech companies to transfer data to the United States is invalid, and that national regulators need to take tougher action to protect the privacy of users’ data.
The ruling does not mean an immediate halt to all data transfers outside the EU, as there is another legal mechanism that some companies can use. But it means that the scrutiny over data transfers will be ramped up and that the EU and U.S. may have to find a new system that guarantees that Europeans’ data is afforded the same privacy protection in the U.S. as it is in the EU.
The case began after former U.S. National Security Agency contractor Edward Snowden revealed in 2013 that the American government was snooping on people’s online data and communications. The revelations included detail on how Facebook gave U.S. security agencies access to the personal data of Europeans.
Austrian activist and law student Max Schrems that year filed a complaint against Facebook, which has its EU base in Ireland, arguing that personal data should not be sent to the U.S., as many companies do, because the data protection is not as strong as in Europe. The EU has some of the toughest data privacy rules under a system known as GDPR.
Google records what people are doing on hundreds of thousands of mobile apps even when they follow the company’s recommended settings for stopping such monitoring, a lawsuit seeking class action status alleged on Tuesday.
The data privacy lawsuit is the second filed in as many months against Google by the law firm Boies Schiller Flexner on behalf of a handful of individual consumers.
[…]
The new complaint in a U.S. district court in San Jose accuses Google of violating federal wiretap law and California privacy law by logging what users are looking at in news, ride-hailing and other types of apps despite them having turned off “Web & App Activity” tracking in their Google account settings.
The lawsuit alleges the data collection happens through Google’s Firebase, a set of software popular among app makers for storing data, delivering notifications and ads, and tracking glitches and clicks. Firebase typically operates inside apps invisibly to consumers.
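To show how ordinary this instrumentation is from a developer’s side, event logging through Firebase takes a couple of calls, after which batching and delivery happen inside the SDK. A sketch using Firebase’s modular JavaScript SDK with placeholder config (the suit concerns the equivalent SDKs embedded in mobile apps):

```typescript
import { initializeApp } from "firebase/app";
import { getAnalytics, logEvent } from "firebase/analytics";

// Placeholder config -- a real app ships its own project keys.
const app = initializeApp({ apiKey: "…", projectId: "…", appId: "…" });
const analytics = getAnalytics(app);

// One line per interaction; the SDK batches and sends these to Google.
// What the lawsuit disputes is whether such data keeps flowing after the
// user turns off "Web & App Activity".
logEvent(analytics, "screen_view", { screen_name: "ride_checkout" });
logEvent(analytics, "select_content", { content_type: "news_article", item_id: "abc123" });
```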
“Even when consumers follow Google’s own instructions and turn off ‘Web & App Activity’ tracking on their ‘Privacy Controls,’ Google nevertheless continues to intercept consumers’ app usage and app browsing communications and personal information,” the lawsuit contends.
Google uses some Firebase data to improve its products and personalize ads and other content for consumers, according to the lawsuit.
Reuters reported in March that U.S. antitrust investigators are looking into whether Google has unlawfully stifled competition in advertising and other businesses by effectively making Firebase unavoidable.
In its case last month, Boies Schiller Flexner accused Google of surreptitiously recording Chrome browser users’ activity even when they activated what Google calls Incognito mode. Google said it would fight the claim.
Most GDPR consent banner implementations are deliberately engineered to be difficult to use and are full of dark patterns that are illegal under the law.
I wanted to find out how many visitors would engage with a GDPR banner if it were implemented properly and how many would grant consent to their information being collected and shared.
[…]
If you implement a proper GDPR consent banner, the vast majority of visitors will most probably decline to give you consent: 91 percent of the 19,000 visitors in my study, to be exact.
What’s a proper and legal implementation of a GDPR banner?
It’s a banner that doesn’t take much space
It allows people to browse your site even while ignoring the banner
It’s a banner that allows visitors to say “no” just as easily as they can say “yes” (see the sketch after this list)
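A minimal version meeting those three criteria takes only a few lines. This is a hypothetical sketch (the element IDs and consent callback are invented; it is not the banner used in the study):

```typescript
// Sketch of a consent banner that follows the rules above: it is small,
// it never blocks the page, and "No" is one click, exactly like "Yes".

function showConsentBanner(onDecision: (consented: boolean) => void): void {
  const banner = document.createElement("div");
  banner.style.cssText =
    "position:fixed;bottom:0;left:0;right:0;padding:8px;background:#eee;";
  banner.innerHTML = `
    We'd like to collect analytics data.
    <button id="consent-yes">Yes</button>
    <button id="consent-no">No</button>`;
  document.body.append(banner);

  const decide = (consented: boolean) => {
    onDecision(consented);
    banner.remove();
  };
  banner.querySelector("#consent-yes")!.addEventListener("click", () => decide(true));
  banner.querySelector("#consent-no")!.addEventListener("click", () => decide(false));
  // No decision? The page keeps working and no tracking script loads.
}

showConsentBanner((consented) => {
  if (consented) {
    // Only now load analytics/tracking scripts.
  }
});
```

The key design choice is that nothing is loaded until the callback reports consent, and ignoring the banner costs the visitor nothing.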
As Alexa, Google Home, Siri, and other voice assistants have become fixtures in millions of homes, privacy advocates have grown concerned that their near-constant listening to nearby conversations could pose more risk than benefit to users. New research suggests the privacy threat may be greater than previously thought.
The findings demonstrate how common it is for dialog in TV shows and other sources to produce false triggers that cause the devices to turn on, sometimes sending nearby sounds to Amazon, Apple, Google, or other manufacturers. In all, researchers uncovered more than 1,000 word sequences—including those from Game of Thrones, Modern Family, House of Cards, and news broadcasts—that incorrectly trigger the devices.
“The devices are intentionally programmed in a somewhat forgiving manner, because they are supposed to be able to understand their humans,” one of the researchers, Dorothea Kolossa, said. “Therefore, they are more likely to start up once too often rather than not at all.”
That which must not be said
Examples of words or word sequences that produce false triggers include:
Alexa: “unacceptable,” “election,” and “a letter”
Google Home: “OK, cool,” and “Okay, who is reading”
Siri: “a city” and “hey jerry”
Microsoft Cortana: “Montana”
The two videos below show a Game of Thrones character saying “a letter” and a Modern Family character uttering “hey Jerry,” activating Alexa and Siri, respectively.
Accidental Trigger #1 – Alexa – Cloud
Accidental Trigger #3 – Hey Siri – Cloud
In both cases, the phrases activate the device locally, where algorithms analyze the phrases; after mistakenly concluding that these are likely a wake word, the devices then send the audio to remote servers where more robust checking mechanisms also mistake the words for wake terms. In other cases, the words or phrases trick only the local wake word detection but not algorithms in the cloud.
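In outline, that two-stage pipeline behaves like the sketch below. The scores, thresholds, and model stubs are invented for illustration; no vendor's actual implementation is public:

```typescript
// Schematic two-stage wake-word detection. The on-device model is tuned
// permissively (it must not miss real wake words); the cloud model is
// stricter. A false trigger that fools both stages ships audio off-device.

function localScore(audio: Float32Array): number {
  /* small on-device model would run here */ return 0.62; // placeholder
}
function cloudScore(audio: Float32Array): number {
  /* larger server-side model would run here */ return 0.71; // placeholder
}

const LOCAL_THRESHOLD = 0.5; // forgiving: "once too often rather than not at all"
const CLOUD_THRESHOLD = 0.7; // stricter second opinion

function handleAudio(audio: Float32Array): void {
  if (localScore(audio) < LOCAL_THRESHOLD) return; // nothing leaves the device
  // The device wakes and uploads the clip for verification -- this upload is
  // the privacy-relevant step, even if the cloud later says "not a wake word".
  if (cloudScore(audio) >= CLOUD_THRESHOLD) {
    console.log("Wake word accepted: start streaming the request");
  } else {
    console.log("Cloud rejected: but the audio was still transmitted");
  }
}

handleAudio(new Float32Array(16000)); // one second of 16 kHz audio
```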
Unacceptable privacy intrusion
When devices wake, the researchers said, they record a portion of what’s said and transmit it to the manufacturer. The audio may then be transcribed and checked by employees in an attempt to improve word recognition. The result: fragments of potentially private conversations can end up in the company logs.
The research paper, titled “Unacceptable, where is my privacy?,” is the product of Lea Schönherr, Maximilian Golla, Jan Wiele, Thorsten Eisenhofer, Dorothea Kolossa, and Thorsten Holz of Ruhr University Bochum and Max Planck Institute for Security and Privacy. In a brief write-up of the findings, they wrote:
Our setup was able to identify more than 1,000 sequences that incorrectly trigger smart speakers. For example, we found that depending on the pronunciation, «Alexa» reacts to the words “unacceptable” and “election,” while «Google» often triggers to “OK, cool.” «Siri» can be fooled by “a city,” «Cortana» by “Montana,” «Computer» by “Peter,” «Amazon» by “and the zone,” and «Echo» by “tobacco.” See videos with examples of such accidental triggers here.
In our paper, we analyze a diverse set of audio sources, explore gender and language biases, and measure the reproducibility of the identified triggers. To better understand accidental triggers, we describe a method to craft them artificially. By reverse-engineering the communication channel of an Amazon Echo, we are able to provide novel insights on how commercial companies deal with such problematic triggers in practice. Finally, we analyze the privacy implications of accidental triggers and discuss potential mechanisms to improve the privacy of smart speakers.
The researchers analyzed voice assistants from Amazon, Apple, Google, Microsoft, and Deutsche Telekom, as well as three Chinese models by Xiaomi, Baidu, and Tencent. Results published on Tuesday focused on the first four. Representatives from Apple, Google, and Microsoft didn’t immediately respond to a request for comment.
The full paper hasn’t yet been published, and the researchers declined to provide a copy ahead of schedule. The general findings, however, already provide further evidence that voice assistants can intrude on users’ privacy even when people don’t think their devices are listening. For those concerned about the issue, it may make sense to keep voice assistants unplugged, turned off, or blocked from listening except when needed—or to forgo using them at all.
How many government demands for user data has Zoom received? We won’t know until “later this year,” an updated Zoom blog post now says.
The video conferencing giant previously said it would release the number of government demands it has received by June 30. But the company said it’s missed that target and has given no firm new date for releasing the figures.
It comes amid heightened scrutiny of the service after a number of security issues and privacy concerns came to light following a massive spike in its user base, thanks to millions working from home because of the coronavirus pandemic.
In a blog post today reflecting on the company’s turnaround efforts, chief executive Eric Yuan said the company has “made significant progress defining the framework and approach for a transparency report that details information related to requests Zoom receives for data, records or content.”
“We look forward to providing the fiscal [second quarter] data in our first report later this year,” he said.
Transparency reports offer rare insights into the number of demands or requests a company gets from the government for user data. These reports are not mandatory, but are important to understand the scale and scope of government surveillance.
Zoom said last month it would launch its first transparency report after the company admitted it briefly suspended the Zoom accounts of two U.S.-based users and one Hong Kong-based activist at the request of the Chinese government. The users, who were not based in China, held a Zoom call commemorating the anniversary of the Tiananmen Square massacre, an event that’s cloaked in secrecy and censorship in mainland China.
Twenty consumer and citizen rights groups have published an open letter [PDF] urging regulators to pay closer attention to Google parent Alphabet’s planned acquisition of Fitbit.
The letter describes the pending purchase as a “game-changer” that will test regulators’ resolve to analyse how the vast quantities of health and location data slurped by Google would affect broader market competition.
“Google could exploit Fitbit’s exceptionally valuable health and location datasets, and data collection capabilities, to strengthen its already dominant position in digital markets such as online advertising,” the group warned.
Signatories to the letter include US-based Color of Change, Center for Digital Democracy and the Omidyar Network, the Australian Privacy Foundation, and BEUC – the European Consumer Organisation.
Google confirmed its intent to acquire Fitbit for $2.1bn in November. The deal is still pending, subject to regulator approval. Google has sought the green light from the European Commission, which is expected to publish its decision on 20 July.
The EU’s executive branch can either approve the buy (with or without additional conditions) or opt to start a four-month investigation.
The US Department of Justice has also started its own investigation, requesting documents from both parties. If the deal is stopped, Google will be forced to pay a $250m termination fee to Fitbit.
Separately, the Australian Competition and Consumer Commission (ACCC) has voiced concerns that the Fitbit-Google deal could have a distorting effect on the advertising market.
“Buying Fitbit will allow Google to build an even more comprehensive set of user data, further cementing its position and raising barriers to entry for potential rivals,” said ACCC chairman Rod Sims last month.
“User data available to Google has made it so valuable to advertisers that it faces only limited competition.”
The Register has asked Google and Fitbit for comment. ®
Updated at 14:06 UTC 02/07/20 to add
A Google spokesperson told The Reg: “Throughout this process we have been clear about our commitment not to use Fitbit health and wellness data for Google ads and our responsibility to provide people with choice and control with their data.
“Similar to our other products, with wearables, we will be transparent about the data we collect and why. And we do not sell personal information to anyone.”
This latest device succeeds the previous Librem 13 laptop, which ran for four generations, and includes a slightly bigger display, a hexa-core Ice Lake Intel Core i7 processor, gigabit Ethernet, and USB-C. As the name implies, the Librem 14 packs a 14-inch, 1920×1080 IPS display. Purism said this comes without increasing the laptop’s dimensions thanks to smaller bezels. You can find the full specs here.
Crucially, it is loaded with the usual privacy features found in Purism’s kit such as hardware kill switches that disconnect the microphone and webcam from the laptop’s circuitry. It also comes with the firm’s PureBoot tech, which includes Purism’s in-house CoreBoot BIOS replacement, and a mostly excised Intel Management Engine (IME).
The IME is a hidden coprocessor included in most of Chipzilla’s chipsets since 2008. It allows system administrators to remotely manage devices using out-of-band communications. But it’s also controversial in the security community since it’s something of a black box.
There is little by way of public documentation. Intel hasn’t released the source code. And, to add insult to injury, it’s also proven vulnerable to exploitation in the past.
Facebook said that it continued sharing user data with approximately 5,000 developers even after their applications’ access had expired.
The incident is related to a security control that Facebook added to its systems following the Cambridge Analytica scandal of early 2018.
Responding to criticism that it allowed app developers too much access to user information, Facebook added at the time a new mechanism to its API that prevented apps from accessing a user’s data if the user did not use the app for more than 90 days.
However, Facebook said that it recently discovered that in some instances, this safety mechanism failed to activate and allowed some apps to continue accessing user information even past the 90-day cutoff date.
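The control that failed is conceptually little more than a date comparison. A hypothetical sketch of the described 90-day rule (not Facebook's code):

```typescript
// Sketch of the 90-day inactivity cutoff Facebook described: an app may
// receive a user's data only if that user has used the app recently.

const NINETY_DAYS_MS = 90 * 24 * 60 * 60 * 1000;

function mayShareData(lastUsedAppAt: Date, now: Date = new Date()): boolean {
  return now.getTime() - lastUsedAppAt.getTime() <= NINETY_DAYS_MS;
}

// The reported bug amounts to this check not being applied on some code
// paths, so data kept flowing past the cutoff.
console.log(mayShareData(new Date("2020-01-01"), new Date("2020-07-01"))); // false
```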
[…]
“From the last several months of data we have available, we currently estimate this issue enabled approximately 5,000 developers to continue receiving [user] information,” Papamiltiadis said.
The company didn’t clarify how many users were impacted and had their data made available to app developers even after they stopped using the apps.
Comcast has agreed to be the first home broadband internet provider to handle secure DNS-over-HTTPS queries for Firefox browser users in the US, Mozilla has announced.
This means the ISP, which has joined Moz’s Trusted Recursive Resolver (TRR) Program, will perform domain-name-to-IP-address lookups for subscribers using Firefox via encrypted HTTPS channels. That prevents network eavesdroppers from snooping on DNS queries or meddling with them to redirect connections to malicious webpages.
Last year Comcast and other broadband giants were fiercely against such safeguards, though it appears Comcast has had a change of heart – presumably when it figured it could offer DNS-over-HTTPS services as well as its plain-text DNS resolvers.
At some point in the near future, Firefox users subscribed to Comcast will use the ISP’s DNS-over-HTTPS resolvers by default, though they can opt to switch to other secure DNS providers or opt-out completely.
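Under the hood, a DNS-over-HTTPS lookup is just an HTTPS request to the resolver. The sketch below uses Cloudflare's public JSON endpoint as a stand-in, since Comcast's resolver URL isn't given here (Node 18+, global fetch):

```typescript
// Sketch: resolve a hostname over DNS-over-HTTPS instead of plain-text UDP.

async function dohResolve(name: string, type = "A"): Promise<string[]> {
  const url = `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=${type}`;
  const res = await fetch(url, { headers: { accept: "application/dns-json" } });
  const body = await res.json();
  // Answers ride inside TLS, so an on-path observer sees only a connection
  // to the resolver -- not which names were looked up.
  return (body.Answer ?? []).map((a: { data: string }) => a.data);
}

dohResolve("example.com").then((ips) => console.log(ips)); // e.g. [ '93.184.216.34' ]
```

Firefox exposes the resolver choice in its network settings; the underlying about:config preferences are network.trr.uri and network.trr.mode.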
[…]
Incredibly, DNS-over-HTTPS was heralded as a way to prevent, among others, ISPs from snooping on and analyzing their subscribers’ web activities to target them with adverts tailored to their interests, or sell the information as a package to advertisers and industry analysts. And yet, here’s Comcast providing a DNS-over-HTTPS service for Firefox fans, allowing it to inspect and exploit their incoming queries if it so wishes. Talk about a fox guarding the hen house.
ISPs “have access to a stream of a user’s browsing history,” Marshall Erwin, senior director of trust and security at, er, Mozilla, warned in November. “This is particularly concerning in light of the rollback of the broadband privacy rules, which removed guardrails for how ISPs can use your data. The same ISPs are now fighting to prevent the deployment of DNS-over-HTTPS.”
Mozilla today insisted its new best buddy Comcast is going to play nice and follow the DNS privacy program’s rules.
Names, addresses, and mobile numbers have been sold for fraud using WhatsApp. Most of these numbers come from call centres, mainly those selling energy contracts. The fresher a lead is, the more it is worth: between 25 cents and 2 euros. The money is usually transferred through mules, who keep a percentage of the proceeds.
Some Firefox users discovered that the new default Windows 10 browser, which is shipped to their devices via Windows Update, sometimes imports data from Mozilla’s application even if they don’t give their permission.
Some of these Firefox users decided to kill the initial setup process of Microsoft Edge, only to discover that despite the wizard shutting down prematurely, the browser still copied data stored by Mozilla’s browser.
Several users confirmed on reddit that this behavior happened on their computers too.
Silent data importing
“Love rebooting my computer to get treated to a forced tour of a browser I’m not going to use that I have to force close through the task manager to escape, and then finding out it’s been copying over my data from Firefox without permission,” one user explains.
“Unless you close it via task manager instead of doing the forced setup, in which case it copies your data anyway, and the worst part is most people will never know what it’s doing because they’ll never open it again. I only reopened it because I noticed it automatically signed me into the browser as it was closing and wanted to sign out before not touching it again, at which point I discovered it had already copied my Firefox data over despite the fact I didn’t go through the setup process,” someone else explains.
Microsoft has remained tight-lipped on this, so for the time being it’s still not known why Edge imports Firefox data even when the initial wizard is manually killed off by the user.
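For context on how an importer can do this without any prompt: browsers typically read a rival's on-disk profile directly. Firefox keeps history and bookmarks in places.sqlite and saved logins in logins.json inside each profile folder. A sketch of locating that data (an illustration of the general technique, not Edge's code):

```typescript
import { readdirSync, existsSync } from "node:fs";
import { join } from "node:path";

// Firefox profiles on Windows live under %APPDATA%\Mozilla\Firefox\Profiles.
const profilesRoot = join(process.env.APPDATA ?? "", "Mozilla", "Firefox", "Profiles");

for (const profile of readdirSync(profilesRoot)) {
  // The files an importer would read: browsing history/bookmarks and logins.
  const history = join(profilesRoot, profile, "places.sqlite");
  const logins = join(profilesRoot, profile, "logins.json");
  if (existsSync(history)) console.log(`history/bookmarks: ${history}`);
  if (existsSync(logins)) console.log(`saved logins: ${logins}`);
}
// No browser prompt is involved at this layer -- file-system access is all
// an importer needs, which is why killing the setup wizard doesn't stop it.
```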
Users who don’t want to be offered the new Edge on Windows Update can turn to the dedicated toolkit that Microsoft released earlier this year, while removing the browser is possible by just uninstalling the update from the device.
Google has introduced “continuous match mode” for apps on its voice-powered Assistant platform, where it will listen to everything without pausing. At the same time it has debuted related developer tools, new features, and the ability to display web content on its Smart Display hardware using the AMP component framework.
The Chocolate Factory has big plans for its voice assistant. “We consider voice to be the biggest paradigm shift around us,” said director of product Baris Gultekin, speaking at the Voice Global summit, where the new features were introduced.
The goal is “ambient computing”, where you can interact with the big G anywhere at any time, so pervasively that you do not notice it. Voice interaction is a key part of this since it extends the ability to perform searches or run applications to scenarios where tapping a keyboard or touching a display are not possible.
Google Assistant exists in many guises such as on smartphones and watches, TVs, PCs, and also on dedicated hardware, such as the voice-only Google Home and Google Home Mini, or with “smart display” screens on the Google Nest Hub or devices from Lenovo and Harman. While assistant devices have been popular, Android phones (which nag you to set up the Assistant) must form the largest subset of users. Across all device types, the company claims more than 500 million active users.
[…]
Actions Builder will “replace DialogFlow as the preferred way to develop actions on the assistant,” said Shodjai.
Google’s new Actions Builder at work
Trying out the new Actions Builder, we discovered that running an action under development is impossible if the Web & App Activity permission, which lets Google keep a record of your activity, is disabled. A dialog appears prompting you to enable it. It is a reminder of how Google Assistant is entwined with the notion that you give Google your data in return for personalised experiences.
[…]
“Sometimes you want to build experiences that enable the mic to remain open, to enable users to speak more naturally with your action, without waiting for a change in mic states,” said Shodjai at the summit and in the developer post.
“Today we are announcing an early access program for Continuous Match Mode, which allows the assistant to respond immediately to user’s speech enabling more natural and fluid experiences. This is done transparently, so that before the mic opens the assistant will announce, ‘the mic will stay open temporarily’, so users know they can now speak freely without waiting for additional prompts.”
The mode is not yet publicly documented. The demonstrated example was for a game with jolly cartoon pictures; but there may be privacy implications since in effect this setting lets the action continue to listen to everything while the mode is active.
Shodjai did not explain how users will end a Continuous Match Mode session but presumably this will be either after a developer-defined exit intent, or via a system intent as with existing actions. Until that happens, the action will be able to keep running.
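Conceptually, a Continuous Match Mode session behaves like the loop below. The API was not publicly documented at the time, so the recognizer callback, phrase lists, and exit handling here are all invented to illustrate the privacy point:

```typescript
// Hypothetical sketch of a continuous-match loop: the mic stays open and
// every utterance is matched against expected phrases, with an exit phrase
// as the main way out. Everything here is invented for illustration.

const EXPECTED_PHRASES = ["red", "green", "blue"]; // e.g. answers in a game
const EXIT_PHRASES = ["stop", "exit game"];

function onUtterance(transcript: string, endSession: () => void): void {
  // The privacy-relevant part: this callback fires for *everything* said
  // while the mode is active, not just for recognized game moves.
  if (EXIT_PHRASES.includes(transcript)) {
    endSession();                       // developer-defined exit intent
  } else if (EXPECTED_PHRASES.includes(transcript)) {
    console.log(`Matched move: ${transcript}`);
  }
  // Unmatched speech is simply ignored -- but it was still heard.
}

// Simulated session:
let open = true;
for (const heard of ["green", "what's for dinner?", "stop"]) {
  if (!open) break;
  onUtterance(heard, () => { open = false; });
}
```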
Just as with personalisation via tracking and data collection, privacy and pervasive computing do not sit comfortably together, and with the new Continuous Match Mode a little more privacy slips away.
If you’re a free Zoom user, and waiting for the company to roll out end-to-end encryption for better protection of your calls, you’re out of luck. Free calls won’t be encrypted, and law enforcement will be able to access your information in case of ‘misuse’ of the platform.
Zoom CEO Eric Yuan today said that the video conferencing app’s upcoming end-to-end encryption feature will be available to only paid users. After announcing the company’s financial results for Q1 2020, Yuan said the firm wants to keep this feature away from free users to work with law enforcement in case of the app’s misuse:
Free users, for sure, we don’t want to give that [end-to-end encryption]. Because we also want to work it together with FBI and local law enforcement, in case some people use Zoom for bad purpose.
In the past, platforms with end-to-end encryption, such as WhatsApp, have faced heavy scrutiny in many countries because they were unable to trace the origins of problematic and misleading messages. Zoom likely wants to avoid being in such a position, and wants to comply with local laws to keep operating across the globe.
Alex Stamos, a security consultant working with Zoom, said the company wants to catch repeat offenders for hate speech or child-exploitation content by not offering end-to-end encryption to free users.
Zoom is dealing with some serious safety issues. When people disrupt meetings (sometimes with hate speech, CSAM, exposure to children and other illegal behaviors) that can be reported by the host. Zoom is working with law enforcement on the worst repeat offenders.
In March, The Intercept published a report stating that the company doesn’t use end-to-end encryption, despite claiming that on its website and security white paper. Later, Zoom apologized and issued a clarification to specify it didn’t provide the feature at that time.
Last month, the company acquired Keybase.io, an encryption-based identity service, to build its end-to-end encryption offering. Yuan said today that the company got a lot of feedback from users on encryption, and it’s working on executing it. However, he didn’t specify a release date for the feature.
According to the Q1 2020 results, the company grew 169% year-on-year in terms of revenue. Zoom has more than 300 million daily participants attending meetings through the platform.
The GSM Association, the body that represents mobile carriers and influences the development of standards, has suggested its members bake virus contact-tracing functionality into their own bundled software.
The body today popped out a paper [PDF] on contact-tracing apps. After some unremarkable observations about the need for and operations of such apps, plus an explanation of the centralised vs. decentralised data storage debate, the paper offers members a section titled: “How the mobile industry can help.”
That section suggests carriers could help to improve the reach of and disseminate such apps with the following three tactics:
Integrate software into own apps (e.g. customer self-care app), if this is part of the national strategy
Pre-install on devices
Communicate to / educate subscribers
The first item may prove unworkable given Google and Apple have indicated they’ll only register coronavirus-related apps if they’re developed by governments and their health agencies. The two tech giants have also said they’ll only allow one app per jurisdiction to use their pro-privacy COVID-19 contact-tracing interface. The second suggestion also has potential pitfalls as contact-tracing apps are generally opt-in affairs. Carriers would need to be sensitive about how they are installed and the user experience offered if the apps ask for registration.
Contact tracing apps have the potential to slow the spread of COVID-19. But without proper security safeguards, some fear they could put users’ data and sensitive info at risk. Until now, that threat has been theoretical. Today, Amnesty International reports that a flaw in Qatar’s contact tracing app put the personal information of more than one million people at risk.
The flaw, now fixed, made info like names, national IDs, health status and location data vulnerable to cyberattacks. Amnesty’s Security Lab discovered the flaw on May 21st and says authorities fixed it on May 22nd. The vulnerability had to do with QR codes that included sensitive info. The update stripped some of that data from the QR codes and added a new layer of authentication to prevent foul play.
Qatar’s app, called EHTERAZ, uses GPS and Bluetooth to track COVID-19 cases, and last week, authorities made it mandatory. According to Amnesty, people who don’t use the app could face up to three years in prison and a fine of QR 200,000 (about $55,000).
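The shape of the fix, as Amnesty describes it, is data minimization plus authentication: keep sensitive fields out of the QR payload and make what remains useless without a server-side check. A hypothetical sketch (the field names and HMAC scheme are illustrative, not EHTERAZ's actual design):

```typescript
import { createHmac } from "node:crypto";

// Before: a QR payload carrying sensitive data directly -- anyone who can
// scan or enumerate codes can read it.
const unsafePayload = {
  nationalId: "28912345678",   // illustrative values only
  healthStatus: "…",
  lastLocation: "…",
};

// After: the QR code carries only an opaque reference plus an HMAC tag.
// The server looks up the record and rejects tampered or guessed codes.
const SERVER_SECRET = "server-side-secret"; // never embedded in the app
function makeQrPayload(recordRef: string): string {
  const tag = createHmac("sha256", SERVER_SECRET).update(recordRef).digest("hex");
  return JSON.stringify({ ref: recordRef, tag });
}

function verifyQrPayload(payload: string): string | null {
  const { ref, tag } = JSON.parse(payload);
  const expected = createHmac("sha256", SERVER_SECRET).update(ref).digest("hex");
  return tag === expected ? ref : null; // null = unauthenticated request
}

console.log(verifyQrPayload(makeQrPayload("rec-001"))); // "rec-001"
```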
“This incident should act as a warning to governments around the world rushing out contact tracing apps that are too often poorly designed and lack privacy safeguards. If technology is to play an effective role in tackling the virus, people need to have confidence that contact tracing apps will protect their privacy and other human rights,” said Claudio Guarnieri, head of Amnesty International’s Security Lab.
Apple may still be recording and transcribing conversations captured by Siri on its phones, despite promising to put an end to the practice nine months ago, claims a former Apple contractor who was hired to listen into customer conversations.
In a letter [PDF] sent to data protection authorities in Europe, Thomas Le Bonniec expresses his frustration that, despite his exposing in April 2019 that Apple had hired hundreds of people to analyze recordings its users were unaware had been made, nothing appears to have changed.
Those recordings were captured by Apple’s Siri digital assistant, which constantly listens out for potential voice commands to obey. The audio was passed to human workers to transcribe, label, and analyze to improve Siri’s neural networks that process what people say. Any time Siri heard something it couldn’t understand – be it a command or someone’s private conversation or an intimate moment – it would send a copy of the audio to the mothership for processing so that it could be retrained to do better next time.
Le Bonniec worked for Apple subcontractor Globe Technical Services in Ireland for two months, performing this manual analysis of audio recorded by Siri, and witnessed what he says was a “massive violation of the privacy of millions of citizens.”
“All over the world, people had their private life recorded by Apple up to the most intimate and sensitive details,” he explained. “Enormous amounts of personal data were collected, stored and analyzed by Apple in an opaque way. These practices are clearly at odds with the company’s privacy-driven policies and should be urgently investigated by Data Protection Authorities and Privacy watchdogs.”
But despite the fact that Apple acknowledged it was in fact transcribing and tagging huge numbers of conversations that users were unaware had been recorded by their Macs and iOS devices, promised a “thorough review of our practices and policies,” and apologized that it hadn’t “been fully living up to our high ideals,” Le Bonniec says nothing has changed.
“Nothing has been done to verify if Apple actually stopped the programme. Some sources already confirmed to me that Apple has not,” he said.
“I believe that Apple’s statements merely aim to reassure their users and public authorities, and they do not care for their user’s consent, unless being forced to obtain it by law,” says the letter. “It is worrying that Apple (and undoubtedly not just Apple) keeps ignoring and violating fundamental rights and continues their massive collection of data.”
In effect, he argues, “big tech companies are basically wiretapping entire populations despite European citizens being told the EU has one of the strongest data protection laws in the world. Passing a law is not good enough: it needs to be enforced upon privacy offenders.”
Not good
How bad is the situation? According to Le Bonniec: “I listened to hundreds of recordings every day, from various Apple devices (e.g. iPhones, Apple Watches, or iPads). These recordings were often taken outside of any activation of Siri, i.e. without any actual intention from the user to activate it for a request.
“These processings were made without users being aware of it, and were gathered into datasets to correct the transcription of the recording made by the device. The recordings were not limited to the users of Apple devices, but also involved relatives, children, friends, colleagues, and whoever could be recorded by the device.
“The system recorded everything: names, addresses, messages, searches, arguments, background noises, films, and conversations. I heard people talking about their cancer, referring to dead relatives, religion, sexuality, pornography, politics, school, relationships, or drugs with no intention to activate Siri whatsoever.”
The US Senate has voted to give law enforcement agencies access to web browsing data without a warrant, dramatically expanding the government’s surveillance powers in the midst of the COVID-19 pandemic.
The power grab was led by Senate majority leader Mitch McConnell as part of a reauthorization of the Patriot Act, which gives federal agencies broad domestic surveillance powers. Sens. Ron Wyden (D-OR) and Steve Daines (R-MT) attempted to remove the expanded powers from the bill with a bipartisan amendment.
But in a shock upset, the privacy-preserving amendment fell short by a single vote after several senators who would have voted “Yes” failed to show up to the session, including Bernie Sanders. Nine Democratic senators also voted “No,” causing the amendment to fall short of the 60-vote threshold it needed to pass.
“The Patriot Act should be repealed in its entirety, set on fire and buried in the ground,” Evan Greer, the deputy director of Fight For The Future, told Motherboard. “It’s one of the worst laws passed in the last century, and there is zero evidence that the mass surveillance programs it enables have ever saved a single human life.”