The Linkielist

Linking ideas with the world

COVID-19 tracing without an app? Google and Apple will ram it down your throat

Google and Apple have updated their COVID-19 contact-tracing tool to make it possible to notify users of potential exposures to the novel coronavirus without an app.

The new Exposure Notifications Express spec is baked into iOS 13.7, which emerged this week and will appear in an Android update due later this month.

This is not, repeat not, pervasive Bluetooth surveillance. The tool requires users to opt in, although public health authorities can use the tool to send notifications suggesting that residents do so.

Those who choose to participate agree to have their device use Bluetooth to search for other nearby opted-in devices, with an exchange of anonymised identifiers used to track encounters. If a user tests positive, and agrees to notify authorities, other users will be told that they are at risk and should act accordingly.
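The anonymised-identifier exchange can be sketched roughly like this: each device keeps a secret daily key and broadcasts short-lived identifiers derived from it, so bystanders can't link the broadcasts, yet a later key upload lets other phones re-derive and match them. This is a heavily simplified illustration, not the actual Google/Apple Exposure Notification key schedule (the function names and derivation are assumptions):

```python
import hashlib
import hmac
import os

# Heavily simplified sketch of rotating-identifier matching; NOT the real
# Exposure Notification key schedule (names and derivation are illustrative).

def daily_key():
    """Random per-day secret that never leaves the device (until a positive test)."""
    return os.urandom(16)

def rolling_id(day_key, interval):
    """Short-lived identifier broadcast over Bluetooth; rotates every ~15 min."""
    return hmac.new(day_key, f"EN-RPI-{interval}".encode(),
                    hashlib.sha256).digest()[:16]

# A nearby phone records the identifiers it overhears, not who sent them.
key = daily_key()
heard = {rolling_id(key, 12)}

# If the key's owner tests positive and uploads their day keys, other
# phones re-derive all identifiers for the day and check for overlap.
derived = {rolling_id(key, i) for i in range(96)}  # 96 * 15 min = 1 day
print(bool(heard & derived))  # → True: an encounter is detected
```

The point of the rotation is that observed identifiers are unlinkable until the day key is voluntarily published.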

The update is designed to let health authorities use Bluetooth-powered contact-tracing without having to build their own apps. It’s still non-trivial to deploy, as the system requires one server to verify test results and another to run both contact-tracing apps and the app-free service.

Apple has published a succinct explainer and Google has offered up code for a notifications server on GitHub.

A couple of dozen US states have signed up for the new tool but other jurisdictions – among them India, Singapore and Australia – are persisting with their own approaches on the basis that the Apple/Google tech makes it harder for their manual contact-tracers to access information.

Source: COVID-19 tracing without an app? There’s an iOS and Android update for that • The Register

Considering the work both companies do with China and other friendly states, it would not surprise me if the “user opt in” feature becomes an “all users opt in without their knowledge because the state is the people and the state knows best” feature in some places.

US Border Patrol Says They Can Create Central Repository Of Traveler Emails, calendar, etc, Keep Them For 75 Years

The U.S. government has taken the opportunity during the global pandemic, when people aren’t traveling out of the country much, to roll out a new platform for storing information they believe they are entitled to take from people crossing the border. A new filing reveals how the U.S. Border Patrol will store data from traveler devices centrally, keeping it backed up and searchable for up to 75 years.

On July 30 the Department of Homeland Security published a privacy impact assessment detailing the electronic data that they may choose to collect from people crossing the border – and what happens to that data.

  • Border Patrol claims the right to search laptops, thumb drives, cell phones, and other “devices capable of storing electronic information”, and when they call it a ‘border search’ they can do this not just when you’re “crossing the U.S. border” in either direction (i.e. when you’re leaving, not just when you’re entering the country) but even “at the extended border”, which generally means within 100 miles of the border – an area where two-thirds of the U.S. population lives.
  • They needed an updated privacy impact assessment because of a new “enterprise-wide solution to manage and analyze certain types of information and metadata USBP collects from electronic devices” – and what they actually keep on file.

Border Patrol will “acquire a mirror copy of the data on the device” they take from a traveler and store it locally. Before uploading it to their network they check to make sure there’s no porn on it (so they search your devices to find porn first). Then once they’ve determined it’s “clean” they transfer the data first to an encrypted thumb drive and then to the Border Patrol-side system called PLX.

Examples of what they plan to keep from travelers’ devices include e-mails; videos and pictures; texts and chat messages; financial accounts and transactions; location history; web browser bookmarks; task lists; calendars; call logs; contacts. Information is stored for 75 years, although if it’s not related to any crime it may be deleted after 20 years.

The government emphasizes it has long been collecting this information; what’s changed is simply that it will be stored in a central system where everything “will now be accessible to a larger number of USBP agents with no nexus” to suspected illegal activity. They promise, though, to restrict access and train staff not to do anything they aren’t supposed to. And they see no risk to privacy because they’ve published a notice (that I’m now writing about) telling you how your privacy may be violated.

Electronic device searches have been on the rise. Between October 2008 and June 2010, 6,500 devices were searched. In 2016 there were 10,000 device searches, and 30,200 in 2017.

It’s not clear though that these searches are all actually legal. In November 2019 a federal judge in Boston ruled that forensic searches of cell phones require at least reasonable suspicion “that the devices contain contraband.”

Source: US Border Patrol Says They Can Create Central Repository Of Traveler Emails, Keep Them For 75 Years – View from the Wing

235 Million Instagram, TikTok And YouTube User Profiles Exposed In Massive Data Leak

An unsecured database discovered on August 1 by Comparitech researchers, led by Bob Diachenko, left the personal profile data of nearly 235 million Instagram, TikTok and YouTube users up for grabs.

The data was spread across several datasets; the most significant being two coming in at just under 100 million each and containing profile records apparently scraped from Instagram. The third-largest was a dataset of some 42 million TikTok users, followed by just under 4 million YouTube user profiles.

Comparitech says that, based on the samples it collected, one in five records contained either a telephone number or email address. Every record also included at least some, sometimes all, of the following information:

  • Profile name
  • Full real name
  • Profile photo
  • Account description

Statistics about follower engagement, including:

  • Number of followers
  • Engagement rate
  • Follower growth rate
  • Audience gender
  • Audience age
  • Audience location
  • Likes
  • Last post timestamp
  • Age
  • Gender

“The information would probably be most valuable to spammers and cybercriminals running phishing campaigns,” Paul Bischoff, Comparitech editor, says. “Even though the data is publicly accessible, the fact that it was leaked in aggregate as a well-structured database makes it much more valuable than each profile would be in isolation,” Bischoff adds. Indeed, Bischoff told me that it would be easy for a bot to use the database to post targeted spam comments on any Instagram profile matching criteria such as gender, age or number of followers.

Tracing the source of the leaked data

So, where did all this data originate? The researchers suggest that the evidence, including dataset names, pointed to a company called Deep Social. However, Deep Social was banned by both Facebook and Instagram in 2018 after scraping user profile data. The company was wound down sometime after this.

A Facebook company spokesperson told me that “scraping people’s information from Instagram is a clear violation of our policies. We revoked Deep Social’s access to our platform in June 2018 and sent a legal notice prohibiting any further data collection.”

Once the researchers found the database and the clues to its origin, “we sent an alert to Deep Social, assuming the data belonged to them,” Bischoff says. The administrators of Deep Social then forwarded the disclosure to a Hong Kong-registered social media influencer data-marketing company called Social Data. “Social Data shut down the database about three hours after our initial email,” Bischoff says.

[…]

Source: 235 Million Instagram, TikTok And YouTube User Profiles Exposed In Massive Data Leak

Securus sued for ‘recording attorney-client jail calls, handing them to cops’ – months after settling similar lawsuit and charging more than 100x normal price for the calls. Hey, monopolies!

Jail phone telco Securus provided recordings of protected attorney-client conversations to cops and prosecutors, it is claimed, just three months after it settled a near-identical lawsuit.

The corporate giant controls all telecommunications between the outside world and prisoners in American jails that contract with it. It charges far above market rate, often more than 100 times, while doing so.

It has now been sued by three defense lawyers in Maine, who accuse the corporation of recording hundreds of conversations between them and their clients – something that is illegal in the US state. It then supplied those recordings to jail administrators and officers of the law, the attorneys allege.

Though police officers can request copies of convicts’ calls to investigate crimes, the cops aren’t supposed to get attorney-client-privileged conversations. In fact, these chats shouldn’t be recorded in the first place. Yet, it is claimed, Securus not only made and retained copies of these sensitive calls, it handed them to investigators and prosecutors.

“Securus failed to screen out attorney-client privileged calls, and then illegally intercepted these calls and distributed them to jail administrators who are often law enforcers,” the lawsuit [PDF] alleged. “In some cases the recordings have been shared with district attorneys.”

The lawsuit claims that over 800 calls covering 150 inmates and 30 law firms have been illegally recorded in the past 12 months, and it provides a (redacted) spreadsheet of all relevant calls.

[…]

Amazingly, this is not the first time Securus has been accused of this same sort of behavior. Just three months ago, in May this year, the company settled a similar class-action lawsuit, this time covering jails in California.

That time, two former prisoners and a criminal defense attorney sued Securus after it recorded more than 14,000 legally protected conversations between inmates and their legal eagles. Those recordings only came to light after someone hacked the corp’s network and found some 70 million stored conversations, which were subsequently leaked to journalists.

[…]

Securus has repeatedly come under fire for similar ethical and technological failings. It was at the center of a huge row after it was revealed to be selling location data on people’s phones to the police through a web portal.

The telecoms giant was also criticized for charging huge rates for video calls, between $5.95 and $7.99 for a 20-minute call, at a jail where the warden banned in-person visits but still required relatives to travel to the jail and sit in a trailer in the prison’s parking lot to talk to their loved ones through a screen.

Securus is privately held so it doesn’t make its financial figures public. A leak in 2014 revealed that it made a $115m profit on $405m in revenue for that year.

Source: Securus sued for ‘recording attorney-client jail calls, handing them to cops’ – months after settling similar lawsuit • The Register

US Secret Service Bought Access to Babel Street’s Locate X Spy Tool for warrantless surveillance

Babel Street is a shadowy organization that offers a product called Locate X that is reportedly used to gather anonymized location data from a host of popular apps that users have unwittingly installed on their phones. When we say “unwittingly,” we mean that not everyone is aware that random innocuous apps are often bundling and anonymizing their data to be sold off to the highest bidder.

Back in March, Protocol reported that U.S. Customs and Border Protection had a contract to use Locate X and that sources inside the secretive company described the system’s capabilities as allowing a user “to draw a digital fence around an address or area, pinpoint mobile devices that were within that area, and see where else those devices have traveled, going back months.”
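As a rough illustration of what such a geofence query involves – filtering a pile of location pings down to the devices seen inside a circle around an address – here is a minimal sketch. The ping format and field names are assumptions for illustration, not Locate X’s actual interface:

```python
import math

# Illustrative geofence query (pure sketch; the ping format and field
# names are assumptions, not anything from Locate X itself).

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6371000.0  # Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def devices_in_fence(pings, lat, lon, radius_m):
    """IDs of devices with at least one ping inside the circular fence."""
    return {p["device"] for p in pings
            if haversine_m(p["lat"], p["lon"], lat, lon) <= radius_m}

pings = [
    {"device": "ad-id-1", "lat": 38.8977, "lon": -77.0365},  # at the centre
    {"device": "ad-id-2", "lat": 38.9072, "lon": -77.0369},  # ~1 km north
]
print(devices_in_fence(pings, 38.8977, -77.0365, 200))  # → {'ad-id-1'}
```

Following each matched device ID backwards through months of pings is then just a lookup by ID, which is what makes the aggregated data so sensitive.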

Protocol’s sources also said that the Secret Service had used the Locate X system in the course of investigating a large credit card skimming operation. On Monday, Motherboard confirmed the investigation when it published an internal Secret Service document it acquired through a Freedom of Information Act (FOIA) request.

The document covers a relationship between the Secret Service and Babel Street from September 28, 2017, to September 27, 2018. In the past, the Secret Service has reportedly used a separate social media surveillance product from Babel Street, and the newly released document puts total fees paid, after the addition of the Locate X license, at $1,999,394.

[…]

Based on Fourth Amendment protections, law enforcement typically has to get a warrant or court order to seek to obtain Americans’ location data. In 2018, the Supreme Court ruled that cops still need a warrant to gather cellphone location data from network providers. And while law enforcement can obtain a warrant for specific cases as it seeks to view location data from a specific region of interest at a specific time, the Locate X system saves government agencies the time of going through judicial review with a next-best-thing approach.

The data brokerage industry benefits from the confusion that the public has about what information is collected and shared by various private companies that are perfectly within their legal rights. You can debate whether it’s acceptable for private companies to sell this data to each other for the purpose of making profits. But when this kind of sale is made to the U.S. government, it’s hard to argue that these practices aren’t, at least, violating the spirit of our constitutional rights.

Source: Secret Service Bought Access to Babel Street’s Locate X Spy Tool

New Toyotas will upload data to AWS to help create custom insurance premiums based on driver behaviour, send your data to others too

Toyota already operates a “Mobility Services Platform” that it says helps it to “develop, deploy, and manage the next generation of data-driven mobility services for driver and passenger safety, security, comfort, and convenience”.

That data comes from a device called the “Data Communication Module” (DCM) that Toyota fits into many models in Japan, the USA and China.

Toyota reckons the data could turn into “new contextual services such as car share, rideshare, full-service lease, and new corporate and consumer services such as proactive vehicle maintenance notifications and driving behavior-based insurance.”

Toyota’s connected car vision

The company has touted that vision since at least 2016, but precious little evidence of it turning into products is available.

Which may be why Toyota has signed with AWS for not just cloud tech but also professional services.

The two companies say their joint efforts “will help build a foundation for streamlined and secure data sharing throughout the company and accelerate its move toward CASE (Connected, Autonomous/Automated, Shared and Electric) mobility technologies.”

Neither party has specified just which bits of the AWS cloud Toyota will take for a spin but it seems sensible to suggest the auto-maker is going to need lots of storage and analytics capabilities, making AWS S3 and Kinesis likely candidates for a test drive.
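Purely as a sketch of what shipping driving telemetry into such a pipeline might look like – the record schema here is an invented assumption, not Toyota’s DCM format – one could build records like this and push them to a Kinesis stream:

```python
import json
import time

# Hypothetical driving-telemetry record an insurer might score; the field
# names are illustrative assumptions, not Toyota's actual DCM schema.

def build_telemetry_record(vin, speed_kmh, hard_brake_events):
    return json.dumps({
        "vin": vin,                          # vehicle identifier
        "ts": int(time.time()),              # capture timestamp
        "speed_kmh": speed_kmh,
        "hard_brake_events": hard_brake_events,
    })

record = build_telemetry_record("JT2BG22K1W0123456", 92, 1)
print(json.loads(record)["speed_kmh"])  # → 92

# Shipping it to Kinesis would look roughly like this (boto3, credentials,
# and a "dcm-telemetry" stream all assumed, hence left commented out):
# import boto3
# boto3.client("kinesis").put_record(
#     StreamName="dcm-telemetry",
#     Data=record,
#     PartitionKey="JT2BG22K1W0123456")
```

Partitioning by VIN would keep each vehicle’s stream ordered, which is exactly the kind of per-driver history an insurer would want to replay.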

Whatever Toyota uses, prepare for privacy ponderings because while cheaper car insurance sounds lovely, having an insurer source driving data from a manufacturer has plenty of potential pitfalls.

Source: Oh what a feeling: New Toyotas will upload data to AWS to help create custom insurance premiums based on driver behaviour • The Register

No, this isn’t a good thing and I hope there’s an opt out

Privacy Shield no longer valid: Joint Press Statement from U.S. Secretary of Commerce Wilbur Ross and European Commissioner for Justice Didier Reynders

The U.S. Department of Commerce and the European Commission have initiated discussions to evaluate the potential for an enhanced EU-U.S. Privacy Shield framework to comply with the July 16 judgment of the Court of Justice of the European Union in the Schrems II case. This judgment declared that this framework is no longer a valid mechanism to transfer personal data from the European Union to the United States.

The European Union and the United States recognize the vital importance of data protection and the significance of cross-border data transfers to our citizens and economies. We share a commitment to privacy and the rule of law, and to further deepening our economic relationship, and have collaborated on these matters for several decades.

Source: Joint Press Statement from U.S. Secretary of Commerce Wilbur Ross and European Commissioner for Justice Didier Reynders | U.S. Department of Commerce

Lawmakers Ask California DMV How It Makes $50 Million a Year Selling Drivers’ Data

A group of nearly a dozen lawmakers led by member of Congress Anna Eshoo wrote to the California Department of Motor Vehicles (DMV) on Wednesday looking for answers on how and why the organization sells the personal data of residents. The letter comes after Motherboard revealed last year that the DMV was making $50 million annually from selling drivers’ information.

The news highlights how selling personal data is not limited to private companies; some government entities follow similar practices too.

“What information is being sold, to whom it is sold, and what guardrails are associated with the sale remain unclear,” the letter, signed by congress members including Ted Lieu, Barbara Lee, and Mike Thompson, as well as California Assembly members Kevin Mullin and Mark Stone, reads.

Specifically, the letter asks what types of organizations the DMV has disclosed drivers’ data to in the past three years. Motherboard has previously reported on how other DMVs around the country sold such information to private investigators, including those hired to spy on suspected cheating spouses. In an earlier email to Motherboard, the California DMV said data requesters may include insurance companies, vehicle manufacturers, and prospective employers.

The information DMVs generally sell includes names, physical addresses, and car registration information. Multiple other DMVs have previously confirmed cutting off access for some clients after they abused the data.

On Wednesday, the California DMV said in an emailed statement, “The DMV does not sell driver information for marketing purposes or to generate revenue outside of the cost of administering its requester program—which only provides certain driver and vehicle related information as statutorily required.”

“The DMV takes its obligation to protect personal information very seriously. Information is only released according to California law, and the DMV continues to review its release practices to ensure information is only released to authorized persons/entities and only for authorized purposes. For example, if a car manufacturer is required to send a recall notice to thousands of owners of a particular model of car, the DMV may provide the car manufacturer with information on California owners of this particular model through this program,” the statement added.

After Motherboard’s earlier investigation into the sale of DMV data to private investigators, senators criticized the practice. Bernie Sanders more specifically said that DMVs should not profit from selling such data.

“In today’s ever-increasing digital world, our private information is too often stolen, abused, used for profit or grossly mishandled,” the new letter from lawmakers reads. “It’s critical that the custodians of the personal information of Americans—from corporations to government agencies—be held to high standards of data protection in order to restore the right of privacy in our country.”

Source: Lawmakers Ask California DMV How It Makes $50 Million a Year Selling Drivers’ Data

Private equity wants to own your DNA – Blackstone buys Ancestry at $250 per person

The nation’s largest private equity firm is interested in buying your DNA data. The going rate: $261 per person. That appears to be what Blackstone, the $63 billion private equity giant, is willing to pay for genetic data controlled by one of the major companies gathering it from millions of customers.

Earlier this week, Blackstone announced it was paying $4.7 billion to acquire Ancestry.com, a pioneer in pop genetics that was launched in the 1990s to help people find out more about their family heritage.

Ancestry’s customers get an at-home DNA kit that they send back to the company. Ancestry then adds that DNA information to its database and sends its users a report about their likely family history. The company will also match you to other family members in its system, including distant cousins you may or may not want to hear from. And for up to $400 a year, you can continue to search Ancestry’s database to add to your knowledge of your family tree.

Ancestry has some information, mostly collected from public databases, on hundreds of millions of individuals. But its most valuable information is that of the people who have taken its DNA tests, which totals 18 million. And at Blackstone’s $4.7 billion purchase price that translates to just over $250 each.
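The per-person figure is simple division of the purchase price by the number of DNA-tested customers:

```python
# Back-of-envelope check of the implied price per DNA profile
purchase_price = 4.7e9   # Blackstone's $4.7 billion for Ancestry
dna_profiles = 18e6      # 18 million DNA-tested customers
print(round(purchase_price / dna_profiles))  # → 261
```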

[…]

Source: Private equity wants to own your DNA – CBS News

Whoops, our bad, we just may have ‘accidentally’ left Google Home devices recording your every word, sound, sorry

Your Google Home speaker may have been quietly recording sounds around your house without your permission or authorization, it was revealed this week.

The Chocolate Factory admitted it had accidentally turned on a feature that allowed its voice-controlled AI-based assistant to activate and snoop on its surroundings. Normally, the device only starts actively listening in and making a note of what it hears after it has heard wake words, such as “Ok, Google” or “Hey, Google,” for privacy reasons. Prior to waking, it’s constantly listening out for those words, but is not supposed to keep a record of what it hears.

Yet punters noticed their Google Homes had been recording random sounds, without any wake word uttered, when they started receiving notifications on their phone that showed the device had heard things like a smoke alarm beeping, or glass breaking in their homes – all without giving their approval.

Google said the feature had been accidentally turned on during a recent software update, and it has now been switched off, Protocol reported. It may be that this feature is or was intended to be used for home security at some point: imagine the assistant waking up whenever it hears a break in, for instance. Google just bought a $450m, or 6.6 per cent, stake in anti-burglary giant ADT.

Source: Whoops, our bad, we just may have ‘accidentally’ left Google Home devices recording your every word, sound, sorry • The Register

Australian government sues Google for misleading consumers in Doubleclick data collection

The Australian government has filed its second lawsuit against Google in less than a year over privacy concerns, this time alleging the tech giant misled Australian consumers in an attempt to gather information for targeted ads. The Australian Competition and Consumers Commission (ACCC), the country’s consumer watchdog, says Google didn’t obtain explicit consent from consumers to collect personal data, according to a statement.

The ACCC cites a 2016 change to Google’s policy in which the company began collecting data about Google account holders’ activity on non-Google sites. Previously, this data was collected by ad-serving technology company DoubleClick and was stored separately, not linked to users’ Google accounts. Google acquired DoubleClick in 2008, and the 2016 change to Google’s policy meant Google and DoubleClick’s data on consumers were combined. Google then used the beefed-up data to sell even more targeted advertising.

From June 2016 to December 2018, Google account holders were met with a pop-up that explained “optional features” to accounts regarding how the company collected their data. Consumers could click “I agree,” and Google would begin collecting a “wide range of personally identifiable information” from them, according to the ACCC. The lawsuit contends that the pop-up didn’t adequately explain what consumers were agreeing to.

“The ACCC considers that consumers effectively pay for Google’s services with their data, so this change introduced by Google increased the ‘price’ of Google’s services, without consumers’ knowledge,” said ACCC Chair Rod Sims. Had more consumers sufficiently understood Google’s change in policy, many may not have consented to it, according to the ACCC.

Google told the Associated Press it disagrees with the ACCC’s allegations, and says Google account holders had been asked to “consent via prominent and easy-to-understand notifications.” It’s unclear what penalty the ACCC is seeking with the lawsuit.

Last October, the ACCC sued Google claiming the company misled Android users about the ability to opt out of location tracking on phones and tablets. That case is headed to mediation next week, according to a February Computer World article.

Source: Australian government sues Google for misleading consumers in data collection | Engadget

See When Other Apps Use Your Microphone or Camera With This Android App

You can get this functionality by downloading and installing a simple app from the Google Play Store: Access Dots. It’s free, it’s easy, and it helps you up your Android’s security game. I would almost call it a must-install for anyone, because it’s as unobtrusive as it is helpful.

Download and launch the app, and you’ll see one simple setting you have to enable. That’s all you have to do to fire up Access Dots’ basic functionality.

Well, that and tapping on the new “Access Dots” listing in your Accessibility settings, and then enabling the service there, too.

Head back to your Android’s Home screen and…you won’t see anything. Zilch. That’s the point. Pull up your Camera app, however, and you’ll see a big green icon appear in the upper-right corner of your device. Tap on your Google Assistant’s microphone icon, and you’ll see an orange dot; the same as what iOS 14 users see.

If you don’t like these colors, you can change them to whatever you want in Access Dots’ settings. You can even change the location of said dot, as well as its size. Tap on the little “History” icon in Access Dots’ main UI – you can’t miss it – and you’ll even be able to browse a log of which apps requested camera or microphone access and for how long they used it.

Though I’m not a huge fan of how many ads litter the Access Dots app, I respect someone’s need to make a little cash. You only see them when you launch the app. Otherwise, all you’ll see on your phone are those dots. That’s not a terrible trade-off, I’d say, given how much this simple security app can do.

Source: See When Other Apps Use Your Microphone or Camera With This Android App

We’re suing Google for harvesting our personal info even though we opted out of Chrome sync – netizens

A handful of Chrome users have sued Google, accusing the browser maker of collecting personal information despite their decision not to sync data stored in Chrome with a Google Account.

The lawsuit [PDF], filed on Monday in a US federal district court in San Jose, California, claimed Google promises not to collect personal information from Chrome users who choose not to sync their browser data with a Google Account but does so anyway.

“Google intentionally and unlawfully causes Chrome to record and send users’ personal information to Google regardless of whether a user elects to Sync or even has a Google account,” the complaint stated.

Filed on behalf of “unsynced” plaintiffs Patrick Calhoun, Elaine Crespo, Hadiyah Jackson and Claudia Kindler – all said to have stopped using Chrome and to wish to return to it, rather than use a different browser, once Google stops tracking unsynced users – the lawsuit cited the Chrome Privacy Notice.

Since 2016, that notice has promised, “You don’t need to provide any personal information to use Chrome.” And since 2019, it has said, “the personal information that Chrome stores won’t be sent to Google unless you choose to store that data in your Google Account by turning on sync,” with earlier versions offering variants on that wording.

Nonetheless, whether or not account synchronization has been enabled, it’s claimed, Google uses Chrome to collect IP addresses linked to user agent data, identifying cookies, unique browser identifiers called X-Client Data Headers, and browsing history. And it does so supposedly in violation of federal wiretap laws and state statutes.

Google then links that information with individuals and their devices, it’s claimed, through practices like cookie syncing, where cookies set in a third-party context get associated with cookies set in a first-party context.

“Cookie synching allows cooperating websites to learn each other’s cookie identification numbers for the same user,” the complaint says. “Once the cookie synching operation is complete, the two websites exchange information that they have collected and hold about a user, further making these cookies ‘Personal Information.'”
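A minimal illustration of the cookie-syncing handshake the complaint describes (every identifier and endpoint here is hypothetical): tracker A redirects the browser to tracker B with A’s cookie ID in the query string, and B records the mapping against its own cookie for that browser.

```python
# Minimal illustration of cookie syncing (all names hypothetical):
# tracker A redirects the browser to tracker B, passing its own cookie
# ID in the URL; B reads its own cookie and records the mapping.

def sync_redirect_url(partner_endpoint, own_cookie_id):
    """URL tracker A sends the browser to, leaking A's ID to B."""
    return f"{partner_endpoint}?partner_uid={own_cookie_id}"

match_table = {}  # B's join table linking the two cookie ID spaces

def handle_sync(own_cookie_id, partner_uid):
    """B-side handler: same browser, two IDs, now linked."""
    match_table[own_cookie_id] = partner_uid

url = sync_redirect_url("https://b.example/sync", "a-cookie-7")
handle_sync("b-cookie-42", "a-cookie-7")  # B saw its cookie plus A's uid
print(match_table)  # → {'b-cookie-42': 'a-cookie-7'}
```

Once the mapping exists, either tracker can merge everything it holds about the browser with the other’s records, which is what the complaint argues turns the cookies into personal information.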

The litigants pointed to Google’s plan to phase out third-party cookies, and noted Google doesn’t need cookies due to the ability of its X-Client-Data Header to uniquely identify people.

Source: We’re suing Google for harvesting our personal info even though we opted out of Chrome sync – netizens • The Register

Twitter Contractors Abused Access to Beyoncé’s Account: Report

Twitter contractors with high-level administrative access to accounts regularly abused their privileges to spy on celebrities including Beyoncé, even approximating their movements via internet protocol addresses, according to a report by Bloomberg.

Over 1,500 workers and contractors at Twitter who handle internal support requests and manage user accounts have high-level privileges that enable them to override user security settings and reset their accounts via Twitter’s backend, as well as view certain details of accounts like IP addresses, phone numbers, and email addresses.

[…]

Two of the former Twitter employees told Bloomberg that projects such as enhancing security of “the system that houses Twitter’s backup files or enhancing oversight of the system used to monitor contractor activity were, at times, shelved for engineering products designed to enhance revenue.” In the meantime, some of those with access (some of whom were contractors with Cognizant at up to six separate work sites) abused it to view details including IP addresses of users. Executives didn’t prioritize policing the internal support team, two of the former employees told Bloomberg, and at times Twitter security allegedly had trouble tracking misconduct due to sheer volume.

A system was in place to create access logs, but it could be fooled by simply creating bullshit support tickets that made the spying appear legitimate; two of the former employees told Bloomberg that from 2017 to 2018 members of the internal support team “made a kind of game out of” the workaround. The security risks inherent to granting access to so many people were reportedly brought up to the company’s board repeatedly from 2015-2019, but little changed.
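In miniature, the log-gaming workaround might look something like this – a purely hypothetical sketch, not Twitter’s actual tooling. If the audit only flags accesses that lack a ticket ID, filing a bogus ticket defeats it:

```python
# Hypothetical sketch of the audit loophole described above: access is
# treated as legitimate whenever it carries ANY filed ticket ID, so a
# bogus ticket makes spying pass the audit.

tickets = set()

def file_ticket(ticket_id):
    """File a support ticket; nothing checks whether it is genuine."""
    tickets.add(ticket_id)
    return ticket_id

def suspicious_accesses(access_log):
    """Naive audit: flag only accesses without a matching ticket."""
    return [e for e in access_log if e["ticket"] not in tickets]

log = [{"agent": "contractor-1", "target": "@celebrity",
        "ticket": file_ticket("T-999")}]  # bogus but "valid" ticket
print(suspicious_accesses(log))  # → []  the spying looks legitimate
```

Tying tickets to externally verified user requests, rather than letting agents self-issue them, is the obvious fix the report suggests was never prioritized.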

This had consequences beyond the most recent hack. Last year, the Department of Justice announced charges against two former employees (a U.S. national and a Saudi citizen) that it accused of espionage on behalf of an individual close to Saudi Crown Prince Mohammed bin Salman. The DOJ alleged that the intent of the operation was to gain access to private information on political dissidents.

Source: Twitter Contractors Abused Access to Beyoncé’s Account: Report

EU demands strange concessions from Google over Fitbit deal – wants Google to share movement data with third parties

The EU has demanded that Google make major concessions relating to its $2.1 billion acquisition of fitness-tracking company Fitbit if the deal is to be allowed to proceed imminently, according to people with direct knowledge of the discussions.

Since it was announced last November, the acquisition has faced steep opposition from consumer groups and regulators, who have raised concerns over the effect of Google’s access to Fitbit’s health data on competition.

EU regulators now want the company to pledge that it will not use that information to “further enhance its search advantage” and that it will grant third parties equal access to it, these people said.

The move comes days after the EU regulators suffered a major blow in Luxembourg, losing a landmark case that would have forced Apple to pay back €14.3 billion in taxes to Ireland.

Brussels insiders said that a refusal by Google to comply with the new demands would probably result in a protracted investigation, adding that such a scenario could ultimately leave the EU at a disadvantage.

“It is like a poker game,” said a person following the case closely. “In a lengthy probe, the commission risks having fewer or no pledges and still having to clear the deal.”

They added that the discussions over the acquisition were “intense,” and there was no guarantee that any agreement between Brussels and Google would be reached.

Google had previously promised it would not use Fitbit’s health data to improve its own advertising, but according to Brussels insiders, the commitment was not sufficient to assuage the EU’s concerns nor those of US regulators also examining the deal.

Source: EU demands major concessions from Google over Fitbit deal | Ars Technica

Uhmmm so they want everybody to have access to this extremely private data?

Instagram and 50 other apps found that quietly access iOS device’s camera

Apple’s iOS 14 beta has proven surprisingly handy at sussing out what apps are snooping on your phone’s data. It ratted out LinkedIn, Reddit, and TikTok for secretly copying clipboard content earlier this month, and now Instagram’s in hot water after several users reported that their camera’s “in use” indicator stays on even when they’re just scrolling through their Instagram feed.

According to reports shared on social media by users with the iOS 14 beta installed, the green “camera on” indicator would pop up when they used the app even when they weren’t taking photos or recording videos. If this sounds like deja vu, that’s because Instagram’s parent company, Facebook, had to fix a similar issue with its iOS app last year when users found their device’s camera would quietly activate in the background without their permission while using Facebook.

In an interview with the Verge, an Instagram spokesperson called this issue a bug that the company’s currently working to patch.

[…]

Even though iOS 14 is still in beta mode and its privacy features aren’t yet available to the general public, it’s already raised plenty of red flags about apps snooping on your data. Though TikTok, LinkedIn, and Reddit may have been the most high-profile examples, researchers Talal Haj Bakry and Tommy Mysk found more than 50 iOS apps quietly accessing users’ clipboards as well. And while there are certainly more malicious breaches of privacy, these kinds of discoveries are a worrying reminder about how much we risk every time we go online.

Source: Instagram to fix bug that quietly accesses iOS device’s camera

Facebook settles unauthorised use of facial recognition for $650 million

Facebook has agreed to pay a total of $650 million in a landmark class action lawsuit over the company’s unauthorized use of facial recognition, a new court filing shows.

The filing represents a revised settlement that increases the total payout by $100 million and comes after a federal judge balked at the original proposal on the grounds it did not adequately punish Facebook.

The settlement covers any Facebook user in Illinois whose picture appeared on the site after 2011. According to the new document, those users can each expect to receive between $200 and $400 depending on how many people file a claim.

The case represents one of the biggest payouts for privacy violations to date, and contrasts sharply with other settlements such as that for the notorious data breach at Equifax—for which victims are expected to receive almost nothing.

The Facebook lawsuit came about as a result of a unique state law in Illinois, which obliges companies to get permission before using facial recognition technology on their customers.

The law has ensnared not just Facebook, but also the likes of Google and photo service Shutterfly. The companies had insisted in court that the law did not apply to their activities, and lobbied the Illinois legislature to rule they were exempt, but these efforts fell short.

The final Facebook settlement is likely to be approved later this year, meaning Illinois residents will be poised to collect a payout in 2021.

The judge overseeing the settlement rejected the initial proposal in June on the grounds that the Illinois law provides penalties of $5,000 per violation, meaning Facebook could have been obliged to pay $47 billion—an amount far exceeding what the company agreed to pay under the settlement.

“We are focused on settling as it is in the best interest of our community and our shareholders to move past this matter,” said a Facebook spokesperson.

Edelson PC, the law firm representing the plaintiffs, declined to comment on the revised deal.

Source: Facebook adds $100 million to facial recognition settlement | Fortune

Amazon’s auditing of Alexa Skills is so good, these boffins got all 200+ rule-breaking apps past the reviewers

Amazon claims it reviews the software created by third-party developers for its Alexa voice assistant platform, yet US academics were able to create more than 200 policy-violating Alexa Skills and get them certified.

In a paper [PDF] presented at the US Federal Trade Commission’s PrivacyCon 2020 event this week, Clemson University researchers Long Cheng, Christin Wilson, Song Liao, Jeffrey Alan Young, Daniel Dong, and Hongxin Hu describe the ineffectiveness of Amazon’s Skills approval process.

The researchers have also set up a website to present their findings.

Like Android and iOS apps, Alexa Skills have to be submitted for review before they’re available to be used with Amazon’s Alexa service. Also like Android and iOS, Amazon’s review process sometimes misses rule-breaking code.

In the researchers’ test, sometimes was every time: The e-commerce giant’s review system granted approval for every one of 234 rule-flouting Skills submitted over a 12-month period.

“Surprisingly, the certification process is not implemented in a proper and effective manner, as opposed to what is claimed that ‘policy-violating skills will be rejected or suspended,’” the paper says. “Second, vulnerable skills exist in Amazon’s skills store, and thus users (children, in particular) are at risk when using [voice assistant] services.”

Amazon disputes some of the findings and suggests that the way the research was done skewed the results by removing rule-breaking Skills after certification, but before other systems like post-certification audits might have caught the offending voice assistant code.

The devil is in the details

Alexa hardware has been hijacked by security researchers for eavesdropping and the software on these devices poses similar security risks, but the research paper concerns itself specifically with content in Alexa Skills that violates Amazon’s rules.

Alexa content prohibitions include limitations on activities like collecting information from children, collecting health information, sexually explicit content, descriptions of graphic violence, self-harm instructions, references to Nazis or hate symbols, hate speech, the promotion of drugs, terrorism, or other illegal activities, and so on.

Getting around these rules involved tactics like adding a counter to Skill code, so the app only starts spewing hate speech after several sessions. The paper cites a range of problems with the way Amazon reviews Skills, including inconsistencies where rejected content gets accepted after resubmission, vetting tools that can’t recognize cloned code submitted by multiple developer accounts, excessive trust in developers, and negligence in spotting data harvesting even when the violations are made obvious.
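The counter tactic can be sketched in a few lines. This is a hedged illustration of the evasion pattern the paper describes, not code from the study; the handler, the threshold, and both replies are hypothetical, and a real Skill would be built on Amazon’s ASK SDK.

```python
# Hypothetical sketch of the counter-based evasion the researchers describe:
# the skill stays benign for its first few sessions, so certification
# reviewers (who only run a handful of test sessions) never reach the
# policy-violating path. All names and the threshold are invented.
session_counts: dict[str, int] = {}

BENIGN_REPLY = "Here is today's weather forecast."
VIOLATING_REPLY = "<content that would fail Amazon's review>"

def handle_launch(user_id: str) -> str:
    # Per-user session counter, persisted across invocations.
    session_counts[user_id] = session_counts.get(user_id, 0) + 1
    if session_counts[user_id] <= 10:
        return BENIGN_REPLY      # what the certification reviewer sees
    return VIOLATING_REPLY       # what long-term users eventually get
```

Because the branching happens on the developer’s own server, nothing in a one-time pre-publication review can rule it out.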

Amazon also does not require developers to re-certify their Skills if the backend code – run on developers’ servers – changes. It’s thus possible for Skills to turn malicious if the developer alters the backend code or an attacker compromises a well-intentioned developer’s server.

As part of the project, the researchers also examined 825 published Skills for kids that either had a privacy policy or a negative review. Among these, 52 had policy violations. Negative comments by users mention unexpected advertisements, inappropriate language, and efforts to collect personal information.

Source: Amazon’s auditing of Alexa Skills is so good, these boffins got all 200+ rule-breaking apps past the reviewers • The Register

Firefox on Android: Camera remains active when phone is locked or the user switches apps after streaming

Mozilla says it’s working on fixing a bug in Firefox for Android that keeps the smartphone camera active even after users have moved the browser in the background or the phone screen was locked.

A Mozilla spokesperson told ZDNet in an email this week that a fix is expected later this year, in October.

The bug was first spotted and reported to Mozilla a year ago, in July 2019, by an employee of video delivery platform Appear TV.

The bug manifests when users choose to video stream from a website loaded in Firefox instead of from a native app.

Mobile users often stream from a mobile browser for privacy reasons, such as not wanting to install an intrusive app and grant it unfettered access to their smartphone’s data. Browsers limit what websites can access on the device, keeping data collection to a minimum.

The Appear TV developer noticed that Firefox video streams kept going, even in situations when they should have normally stopped.

While this raises issues with streams continuing to consume the user’s bandwidth, the bug was also deemed a major privacy issue as Firefox would continue to stream from the user’s device in situations where the user expected privacy by switching to another app or locking the device.

“From our analysis, a website is allowed to retain access to your camera or microphone whilst you’re using other apps, or even if the phone is locked,” a spokesperson for Traced, a privacy app, told ZDNet, after alerting us to the issue.

“While there are times you might want the microphone or video to keep working in the background, your camera should never record you when your phone is locked,” Traced added.

Source: Firefox on Android: Camera remains active when phone is locked or the user switches apps | ZDNet

Mozilla offers trusted VPN services – good timing!

Starting today, there’s a VPN on the market from a company you trust. The Mozilla VPN (Virtual Private Network) is now available on Windows and Android devices. This fast and easy-to-use VPN service is brought to you by Mozilla, the makers of Firefox, and a trusted name in online consumer security and privacy services.

The first thing you may notice when you install the Mozilla VPN is how fast your browsing experience is. That’s because the Mozilla VPN is built on modern, lean technology: the WireGuard protocol’s roughly 4,000 lines of code are a fraction of the size of the legacy protocols used by other VPN service providers.

You will also see a simple, easy-to-use interface, whether you are new to VPNs or just want to set it up and get onto the web.

With no long-term contracts required, the Mozilla VPN is available for just $4.99 USD per month and will initially be available in the United States, Canada, the United Kingdom, Singapore, Malaysia, and New Zealand, with plans to expand to other countries this Fall.

Source: Mozilla Puts Its Trusted Stamp on VPN – The Mozilla Blog

Especially after seven “no logs” VPN services just dumped millions of lines of logs containing very, very personal information

E.U. Court Invalidates Data-Sharing Agreement With U.S.

The European Union’s top court ruled Thursday that an agreement that allows big tech companies to transfer data to the United States is invalid, and that national regulators need to take tougher action to protect the privacy of users’ data.

The ruling does not mean an immediate halt to all data transfers outside the EU, as there is another legal mechanism that some companies can use. But it means that the scrutiny over data transfers will be ramped up and that the EU and U.S. may have to find a new system that guarantees that Europeans’ data is afforded the same privacy protection in the U.S. as it is in the EU.

The case began after former U.S. National Security Agency contractor Edward Snowden revealed in 2013 that the American government was snooping on people’s online data and communications. The revelations included detail on how Facebook gave U.S. security agencies access to the personal data of Europeans.

Austrian activist and law student Max Schrems that year filed a complaint against Facebook, which has its EU base in Ireland, arguing that personal data should not be sent to the U.S., as many companies do, because the data protection is not as strong as in Europe. The EU has some of the toughest data privacy rules under a system known as GDPR.

Source: E.U. Court Invalidates Data-Sharing Agreement With U.S. | Time

Google faces lawsuit over tracking in apps even when users opted out

Google records what people are doing on hundreds of thousands of mobile apps even when they follow the company’s recommended settings for stopping such monitoring, a lawsuit seeking class action status alleged on Tuesday.

The data privacy lawsuit is the second filed in as many months against Google by the law firm Boies Schiller Flexner on behalf of a handful of individual consumers.

[…]

The new complaint in a U.S. district court in San Jose accuses Google of violating federal wiretap law and California privacy law by logging what users are looking at in news, ride-hailing and other types of apps despite them having turned off “Web & App Activity” tracking in their Google account settings.

The lawsuit alleges the data collection happens through Google’s Firebase, a set of software popular among app makers for storing data, delivering notifications and ads, and tracking glitches and clicks. Firebase typically operates inside apps invisibly to consumers.

“Even when consumers follow Google’s own instructions and turn off ‘Web & App Activity’ tracking on their ‘Privacy Controls,’ Google nevertheless continues to intercept consumers’ app usage and app browsing communications and personal information,” the lawsuit contends.

Google uses some Firebase data to improve its products and personalize ads and other content for consumers, according to the lawsuit.

Reuters reported in March that U.S. antitrust investigators are looking into whether Google has unlawfully stifled competition in advertising and other businesses by effectively making Firebase unavoidable.

In its case last month, Boies Schiller Flexner accused Google of surreptitiously recording Chrome browser users’ activity even when they activated what Google calls Incognito mode. Google said it would fight the claim.

Source: Google faces lawsuit over tracking in apps even when users opted out – Reuters

The days of “Don’t be evil” are long past

Only 9% of visitors give GDPR consent to be tracked

Most GDPR consent banner implementations are deliberately engineered to be difficult to use and are full of dark patterns that are in fact illegal under the GDPR.

I wanted to find out how many visitors would engage with a GDPR banner if it were implemented properly and how many would grant consent to their information being collected and shared.

[…]

If you implement a proper GDPR consent banner, the vast majority of visitors will most probably decline to give you consent: 91% to be exact, out of the 19,000 visitors in my study.

What’s a proper and legal implementation of a GDPR banner?

  • It’s a banner that doesn’t take much space
  • It allows people to browse your site even when ignoring the banner
  • It’s a banner that allows visitors to say “no” just as easily as they can say “yes”

[…]

Source: Only 9% of visitors give GDPR consent to be tracked

Uncovered: 1,000 phrases that incorrectly trigger Alexa, Siri, and Google Assistant

As Alexa, Google Home, Siri, and other voice assistants have become fixtures in millions of homes, privacy advocates have grown concerned that their near-constant listening to nearby conversations could pose more risk than benefit to users. New research suggests the privacy threat may be greater than previously thought.

The findings demonstrate how common it is for dialog in TV shows and other sources to produce false triggers that cause the devices to turn on, sometimes sending nearby sounds to Amazon, Apple, Google, or other manufacturers. In all, researchers uncovered more than 1,000 word sequences—including those from Game of Thrones, Modern Family, House of Cards, and news broadcasts—that incorrectly trigger the devices.

“The devices are intentionally programmed in a somewhat forgiving manner, because they are supposed to be able to understand their humans,” one of the researchers, Dorothea Kolossa, said. “Therefore, they are more likely to start up once too often rather than not at all.”

That which must not be said

Examples of words or word sequences that provide false triggers include

  • Alexa: “unacceptable,” “election,” and “a letter”
  • Google Home: “OK, cool,” and “Okay, who is reading”
  • Siri: “a city” and “hey jerry”
  • Microsoft Cortana: “Montana”

The two videos below show a Game of Thrones character saying “a letter” and a Modern Family character uttering “hey Jerry,” activating Alexa and Siri, respectively.

Accidental Trigger #1 – Alexa – Cloud
Accidental Trigger #3 – Hey Siri – Cloud

In both cases, the phrases activate the device locally, where algorithms analyze the phrases; after mistakenly concluding that these are likely a wake word, the devices then send the audio to remote servers where more robust checking mechanisms also mistake the words for wake terms. In other cases, the words or phrases trick only the local wake word detection but not algorithms in the cloud.
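The two-stage pipeline can be sketched as follows. This is a deliberately simplified illustration under invented assumptions: real assistants run acoustic models, not string similarity, and both thresholds below are made up purely to show how a lenient local stage plus a stricter cloud stage interact.

```python
# Minimal sketch of the two-stage wake-word pipeline described above: a
# deliberately forgiving on-device detector followed by a stricter cloud
# check. The string-similarity scoring and both thresholds are invented
# for illustration only; real assistants use acoustic models.
from difflib import SequenceMatcher

WAKE_WORD = "alexa"
LOCAL_THRESHOLD = 0.45  # lenient: wake "once too often rather than not at all"
CLOUD_THRESHOLD = 0.80  # stricter second pass on the server

def similarity(heard: str, wake: str) -> float:
    return SequenceMatcher(None, heard.lower(), wake).ratio()

def local_trigger(heard: str) -> bool:
    # Stage 1, on device: once this fires, audio starts streaming out.
    return similarity(heard, WAKE_WORD) >= LOCAL_THRESHOLD

def cloud_confirms(heard: str) -> bool:
    # Stage 2, server side: a rejection here still means the audio
    # already left the device.
    return similarity(heard, WAKE_WORD) >= CLOUD_THRESHOLD
```

With these made-up thresholds, a phrase like “a letter” scores high enough to pass the lenient local check but fails the cloud check, mirroring the second failure mode above: by the time the cloud rejects it, the audio has already been transmitted.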

Unacceptable privacy intrusion

When devices wake, the researchers said, they record a portion of what’s said and transmit it to the manufacturer. The audio may then be transcribed and checked by employees in an attempt to improve word recognition. The result: fragments of potentially private conversations can end up in the company logs.

The risk to privacy isn’t solely theoretical. In 2016, law enforcement authorities investigating a murder subpoenaed Amazon for Alexa data transmitted in the moments leading up to the crime. Last year, The Guardian reported that Apple employees sometimes transcribe sensitive conversations overheard by Siri. They include private discussions between doctors and patients, business deals, seemingly criminal dealings, and sexual encounters.

The research paper, titled “Unacceptable, where is my privacy?,” is the product of Lea Schönherr, Maximilian Golla, Jan Wiele, Thorsten Eisenhofer, Dorothea Kolossa, and Thorsten Holz of Ruhr University Bochum and Max Planck Institute for Security and Privacy. In a brief write-up of the findings, they wrote:

Our setup was able to identify more than 1,000 sequences that incorrectly trigger smart speakers. For example, we found that depending on the pronunciation, «Alexa» reacts to the words “unacceptable” and “election,” while «Google» often triggers to “OK, cool.” «Siri» can be fooled by “a city,” «Cortana» by “Montana,” «Computer» by “Peter,” «Amazon» by “and the zone,” and «Echo» by “tobacco.” See videos with examples of such accidental triggers here.

In our paper, we analyze a diverse set of audio sources, explore gender and language biases, and measure the reproducibility of the identified triggers. To better understand accidental triggers, we describe a method to craft them artificially. By reverse-engineering the communication channel of an Amazon Echo, we are able to provide novel insights on how commercial companies deal with such problematic triggers in practice. Finally, we analyze the privacy implications of accidental triggers and discuss potential mechanisms to improve the privacy of smart speakers.

The researchers analyzed voice assistants from Amazon, Apple, Google, Microsoft, and Deutsche Telekom, as well as three Chinese models by Xiaomi, Baidu, and Tencent. Results published on Tuesday focused on the first four. Representatives from Apple, Google, and Microsoft didn’t immediately respond to a request for comment.

The full paper hasn’t yet been published, and the researchers declined to provide a copy ahead of schedule. The general findings, however, already provide further evidence that voice assistants can intrude on users’ privacy even when people don’t think their devices are listening. For those concerned about the issue, it may make sense to keep voice assistants unplugged, turned off, or blocked from listening except when needed—or to forgo using them at all.

Source: Uncovered: 1,000 phrases that incorrectly trigger Alexa, Siri, and Google Assistant | Ars Technica

Zoom misses its own deadline to publish its first transparency report

How many government demands for user data has Zoom received? We won’t know until “later this year,” an updated Zoom blog post now says.

The video conferencing giant previously said it would release the number of government demands it has received by June 30. But the company said it’s missed that target and has given no firm new date for releasing the figures.

It comes amid heightened scrutiny of the service after a number of security issues and privacy concerns came to light following a massive spike in its user base, thanks to millions working from home because of the coronavirus pandemic.

In a blog post today reflecting on the company’s turnaround efforts, chief executive Eric Yuan said the company has “made significant progress defining the framework and approach for a transparency report that details information related to requests Zoom receives for data, records or content.”

“We look forward to providing the fiscal [second quarter] data in our first report later this year,” he said.

Transparency reports offer rare insights into the number of demands or requests a company gets from the government for user data. These reports are not mandatory, but are important to understand the scale and scope of government surveillance.

Zoom said last month it would launch its first transparency report after the company admitted it briefly suspended the Zoom accounts of two U.S.-based activists and one Hong Kong activist at the request of the Chinese government. The users, who were not based in China, held a Zoom call commemorating the anniversary of the Tiananmen Square massacre, an event that’s cloaked in secrecy and censorship in mainland China.

Source: Zoom misses its own deadline to publish its first transparency report | TechCrunch