Governments, Apple, and Google spying on users through push notifications – which all pass through Apple and Google servers (unencrypted?)!

In a letter to the Department of Justice, Senator Ron Wyden said foreign officials were demanding the data from Alphabet’s (GOOGL.O) Google and Apple (AAPL.O). Although details were sparse, the letter lays out yet another path by which governments can track smartphones.

Apps of all kinds rely on push notifications to alert smartphone users to incoming messages, breaking news, and other updates. These are the audible “dings” or visual indicators users get when they receive an email or their sports team wins a game. What users often do not realize is that almost all such notifications travel over Google and Apple’s servers.

That gives the two companies unique insight into the traffic flowing from those apps to their users, and in turn puts them “in a unique position to facilitate government surveillance of how users are using particular apps,” Wyden said. He asked the Department of Justice to “repeal or modify any policies” that hindered public discussions of push notification spying.

In a statement, Apple said that Wyden’s letter gave it the opening it needed to share more details with the public about how governments monitored push notifications.

“In this case, the federal government prohibited us from sharing any information,” the company said in a statement. “Now that this method has become public we are updating our transparency reporting to detail these kinds of requests.”

Google said that it shared Wyden’s “commitment to keeping users informed about these requests.”

The Department of Justice did not return messages seeking comment on the push notification surveillance or whether it had prevented Apple or Google from talking about it.

Wyden’s letter cited a “tip” as the source of the information about the surveillance. His staff did not elaborate on the tip, but a source familiar with the matter confirmed that both foreign and U.S. government agencies have been asking Apple and Google for metadata related to push notifications to, for example, help tie anonymous users of messaging apps to specific Apple or Google accounts.

The source declined to identify the foreign governments involved in making the requests but described them as democracies allied to the United States.

The source said they did not know how long such information had been gathered in that way.

Most users give push notifications little thought, but they have occasionally attracted attention from technologists because of the difficulty of deploying them without sending data to Google or Apple.

Earlier this year French developer David Libeau said users and developers were often unaware of how their apps emitted data to the U.S. tech giants via push notifications, calling them “a privacy nightmare.”

Source: Governments spying on Apple, Google users through push notifications – US senator | Reuters

Alternative browsers about to die? Firefox may soon be delisted from the US govt support matrix :'(

A somewhat obscure guideline for developers of U.S. government websites may be about to accelerate the long, sad decline of Mozilla’s Firefox browser. There already are plenty of large entities, both public and private, whose websites lack proper support for Firefox; and that will get only worse in the near future, because the ’fox’s auburn paws are perilously close to the lip of the proverbial slippery slope.

The U.S. Web Design System (USWDS) provides a comprehensive set of standards which guide those who build the U.S. government’s many websites. Its documentation for developers borrows a “2% rule” from its British counterpart:

. . . we officially support any browser above 2% usage as observed by analytics.usa.gov.

At this writing, that analytics page shows the following browser traffic for the previous ninety days:

Browser             Share
Chrome              49%
Safari              34.8%
Edge                8.4%
Firefox             2.2%
Safari (in-app)     1.9%
Samsung Internet    1.6%
Android Webview     1%
Other               1%
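
To make the cutoff concrete, here is a minimal sketch (plain Python, not any official USWDS tooling) of that 2% rule applied to the snapshot above; the share figures are the ones quoted in this post, not live analytics:

```python
# The "2% rule": officially support any browser above 2% of observed traffic.
# Share figures are the analytics.usa.gov snapshot quoted above.
shares = {
    "Chrome": 49.0,
    "Safari": 34.8,
    "Edge": 8.4,
    "Firefox": 2.2,
    "Safari (in-app)": 1.9,
    "Samsung Internet": 1.6,
    "Android Webview": 1.0,
    "Other": 1.0,
}

THRESHOLD = 2.0  # percent of traffic, per the USWDS guideline

supported = [browser for browser, pct in shares.items() if pct > THRESHOLD]
print(supported)
# ['Chrome', 'Safari', 'Edge', 'Firefox']
```

Firefox clears the bar by a mere 0.2 points; a small further slide flips it out of the supported list.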

I am personally unaware of any serious reason to believe that Firefox’s numbers will improve soon. Indeed, for the web as a whole, they’ve been declining consistently for years, as this chart shows:

Chrome vs. Firefox vs. Safari for January, 2009, through November, 2023.
Image: StatCounter.

Firefox peaked at 31.82% in November, 2009 — and then began its long slide in almost direct proportion to the rise of Chrome. The latter shot from 1.37% use in January, 2009, to its own peak of 66.34% in September, 2020, since falling back to a “measly” 62.85% in the very latest data.

While these numbers reflect worldwide trends, the U.S.-specific picture isn’t really better. In fact, because the iPhone is so popular in the U.S. — which is obvious from what you see on that aforementioned government analytics page — Safari pulls large numbers that also hurt Firefox.

[…]

Firefox is quickly losing “web space,” thanks to a perfect storm that’s been kicked up by the dominance of Chrome, the popularity of mobile devices that run Safari by default, and many corporate and government IT shops’ insistence that their users rely on only Microsoft’s Chromium-based Edge browser while toiling away each day.

With such a continuing free-fall, Firefox is inevitably nearing the point where USWDS will remove it, like Internet Explorer before it, from the list of supported browsers.

[…]

Source: Firefox on the brink? The Big Three may effectively be down to a Big Two, and right quick.

Competition is important, especially in the world of browsers, which are our window into the vast majority of the internet. Allowing one browser to rule them all leads to some very strange and nasty stuff. Not only does the dominant browser stop following W3C standards (IE didn’t, and Chrome doesn’t), it starts taking extreme liberties with your privacy (a “privacy sandbox” that lets any site query all your habits!), picks on certain websites and even edits what you see, sends your passwords and other personal data to third-party sites, shares your motion data, refuses to delete private data it holds on you, etc etc etc

Firefox is a very good browser with some awesome addons – and not beholden to the Google or Microsoft or Apple overlords. And it’s the only private one offering you a real choice outside of the Chromium reach.

If Creators Suing AI Companies Over Copyright Insanely Win, It Will Further Entrench Big Tech

There’s been this weird idea lately, even among people who used to recognize that copyright only empowers the largest gatekeepers, that in the AI world we have to magically flip the script on copyright and use it as a tool to get AI companies to pay for the material they train on. But, as we’ve explained repeatedly, this would be a huge mistake. Even if people are concerned about how AI works, copyright is not the right tool to use here, and the risk of it being used to destroy all sorts of important and useful tools is quite high (ignoring Elon Musk’s prediction that “Digital God” will obsolete all of this).

However, because so many people think that they’re supporting creators and “sticking it” to Big Tech in supporting these copyright lawsuits over AI, I thought it might be useful to play out how this would work in practice. And, spoiler alert, the end result would be a disaster for creators, and a huge benefit to big tech. It’s exactly what we should be fighting against.

And, we know this because we have decades of copyright law and the internet to observe. Copyright law, by its very nature as a monopoly right, has always served the interests of gatekeepers over artists. This is why the most aggressive enforcers of copyright are the very middlemen with long histories of screwing over the actual creatives: the record labels, the TV and movie studios, the book publishers, etc.

This is because the nature of copyright law is such that it is most powerful when a few large entities act as central repositories for the copyrights and can lord around their power and try to force other entities to pay up. This is how the music industry has worked for years, and you can see what’s happened. After years of fighting internet music, it finally devolved into a situation where there are a tiny number of online music services (Spotify, Apple, YouTube, etc.) who cut massive deals with the giant gatekeepers on the other side (the record labels, the performance rights orgs, the collection societies) while the actual creators get pennies.

This is why we’ve said that AI training will never fit neatly into a licensing regime. The almost certain outcome (because it’s what happens every other time a similar situation arises) is that there will be one (possibly two) giant entities designated as the “collection society,” with whom AI companies will have to negotiate or from whom they can simply purchase a “training license.” That entity will then collect a ton of money, much of which will go towards “administration,” and actual artists will… get a tiny bit.

And, because of the nature of training data, which only needs to be collected once, it’s not likely that this will be a recurring payment, but a minuscule one-off for the right to train on the data.

But, given the enormous amount of content and the structure of this kind of thing, the cost will be extremely high for the AI companies (a few pennies for every creator online adds up in aggregate), meaning that only the biggest of big tech will be able to afford it.

In other words, the end result of a win in this kind of litigation (or, if Congress decides to act to achieve something similar) would be the further locking-in of the biggest companies. Google, Meta, and OpenAI (with Microsoft’s money) can afford the license, and will toss off a tiny one-time payment to creators (while whatever collection society there is takes a big cut for administration).

And then all of the actually interesting smaller companies and open source models are screwed.

End result? More lock-in of the biggest of big tech in exchange for… a few pennies for creators?

That’s not a beneficial outcome. It’s a horrible outcome. It will not just limit innovation, but it will massively limit competition and provide an even bigger benefit to the biggest incumbents.

Source: If Creators Suing AI Companies Over Copyright Win, It Will Further Entrench Big Tech | Techdirt

Automakers’ data privacy practices “are unacceptable,” says US senator

US Senator Edward Markey (D-Mass.) is one of the more technologically engaged of our elected lawmakers. And like many technologically engaged Ars Technica readers, he does not like what he sees in terms of automakers’ approach to data privacy. On Friday, Sen. Markey wrote to 14 car companies with a variety of questions about data privacy policies, urging them to do better.

As Ars reported in September, the Mozilla Foundation published a scathing report on the subject of data privacy and automakers. The problems were widespread—most automakers collect too much personal data and are too eager to sell or share it with third parties, the foundation found.

Markey noted the Mozilla Foundation report in his letters, which were sent to BMW, Ford, General Motors, Honda, Hyundai, Kia, Mazda, Mercedes-Benz, Nissan, Stellantis, Subaru, Tesla, Toyota, and Volkswagen. The senator is concerned about the large amounts of data that modern cars can collect, including the troubling potential to use biometric data (like the rate a driver blinks and breathes, as well as their pulse) to infer mood or mental health.

Sen. Markey is also worried about automakers’ use of Bluetooth, which he said has expanded “their surveillance to include information that has nothing to do with a vehicle’s operation, such as data from smartphones that are wirelessly connected to the vehicle.”

“These practices are unacceptable,” Markey wrote. “Although certain data collection and sharing practices may have real benefits, consumers should not be subject to a massive data collection apparatus, with any disclosures hidden in pages-long privacy policies filled with legalese. Cars should not—and cannot—become yet another venue where privacy takes a backseat.”

The 14 automakers have until December 21 to answer the following questions:

  • Does your company collect user data from its vehicles, including but not limited to the actions, behaviors, or personal information of any owner or user?
    • If so, please describe how your company uses data about owners and users collected from its vehicles. Please distinguish between data collected from users of your vehicles and data collected from those who sign up for additional services.
    • Please identify every source of data collection in your new model vehicles, including each type of sensor, interface, or point of collection from the individual and the purpose of that data collection.
    • Does your company collect more information than is needed to operate the vehicle and the services to which the individual consents?
    • Does your company collect information from passengers or people outside the vehicle? If so, what information and for what purposes?
    • Does your company sell, transfer, share, or otherwise derive commercial benefit from data collected from its vehicles to third parties? If so, how much did third parties pay your company in 2022 for that data?
    • Once your company collects this user data, does it perform any categorization or standardization procedures to group the data and make it readily accessible for third-party use?
    • Does your company use this user data, or data on the user acquired from other sources, to create user profiles of any sort?
    • How does your company store and transmit different types of data collected on the vehicle? Do your company’s vehicles include a cellular connection or Wi-Fi capabilities for transmitting data from the vehicle?
  • Does your company provide notice to vehicle owners or users of its data practices?
  • Does your company provide owners or users an opportunity to exercise consent with respect to data collection in its vehicles?
    • If so, please describe the process by which a user is able to exercise consent with respect to such data collection. If not, why not?
    • If users are provided with an opportunity to exercise consent to your company’s services, what percentage of users do so?
    • Do users lose any vehicle functionality by opting out of or refusing to opt in to data collection? If so, does the user lose access only to features that strictly require such data collection, or does your company disable features that could otherwise operate without that data collection?
  • Can all users, regardless of where they reside, request the deletion of their data? If so, please describe the process through which a user may delete their data. If not, why not?
  • Does your company take steps to anonymize user data when it is used for its own purposes, shared with service providers, or shared with non-service provider third parties? If so, please describe your company’s process for anonymizing user data, including any contractual restrictions on re-identification that your company imposes.
  • Does your company have any privacy standards or contractual restrictions for the third-party software it integrates into its vehicles, such as infotainment apps or operating systems? If so, please provide them. If not, why not?
  • Please describe your company’s security practices, data minimization procedures, and standards in the storage of user data.
    • Has your company suffered a leak, breach, or hack within the last ten years in which user data was compromised?
    • If so, please detail the event(s), including the nature of your company’s system that was exploited, the type and volume of data affected, and whether and how your company notified its impacted users.
    • Is all the personal data stored on your company’s vehicles encrypted? If not, what personal data is left open and unprotected? What steps can consumers take to limit this open storage of their personal information on their cars?
  • Has your company ever provided to law enforcement personal information collected by a vehicle?
    • If so, please identify the number and types of requests that law enforcement agencies have submitted and the number of times your company has complied with those requests.
    • Does your company provide that information only in response to a subpoena, warrant, or court order? If not, why not?
  • Does your company notify the vehicle owner when it complies with a request?

Source: Automakers’ data privacy practices “are unacceptable,” says US senator | Ars Technica

The UK tries, once again, to age-gate pornography, keep a list of porn watchers

UK telecoms regulator Ofcom has laid out how porn sites could verify users’ ages under the newly passed Online Safety Act. Although the law gives sites the choice of how they keep out underage users, the regulator is publishing a list of measures they’ll be able to use to comply. These include having a bank or mobile network confirm that a user is at least 18 years old (with that user’s consent) or asking a user to supply valid details for a credit card that’s only available to people who are 18 and older. The regulator is consulting on these guidelines starting today and hopes to finalize its official guidance in roughly a year’s time.

The measures have the potential to be contentious and come a little over four years after the UK government scrapped its last attempt to mandate age verification for pornography. Critics raised numerous privacy and technical concerns with the previous approach, and the plans were eventually shelved with the hope that the Online Safety Act (then emerging as the Online Harms White Paper) would offer a better way forward. Now we’re going to see if that’s true, or if the British government was just kicking the can down the road.

[…]

Ofcom lists six age verification methods in today’s draft guidelines. As well as turning to banks, mobile networks, and credit cards, other suggested measures include asking users to upload photo ID like a driver’s license or passport, or for sites to use “facial age estimation” technology to analyze a person’s face to determine that they’ve turned 18. Simply asking a site visitor to declare that they’re an adult won’t be considered strict enough.

Once the duties come into force, pornography sites will be able to choose from Ofcom’s approaches or implement their own age verification measures so long as they’re deemed to hit the “highly effective” bar demanded by the Online Safety Act. The regulator will work with larger sites directly and keep tabs on smaller sites by listening to complaints, monitoring media coverage, and working with frontline services. Noncompliance with the Online Safety Act can be punished with fines of up to £18 million (around $22.7 million) or 10 percent of global revenue (whichever is higher).

[…]

“It is very concerning that Ofcom is solely relying upon data protection laws and the ICO to ensure that privacy will be protected,” ORG program manager Abigail Burke said in a statement. “The Data Protection and Digital Information Bill, which is progressing through parliament, will seriously weaken our current data protection laws, which are in any case insufficient for a scheme this intrusive.”

“Age verification technologies for pornography risk sensitive personal data being breached, collected, shared, or sold. The potential consequences of data being leaked are catastrophic and could include blackmail, fraud, relationship damage, and the outing of people’s sexual preferences in very vulnerable circumstances,” Burke said, and called for Ofcom to set out clearer standards for protecting user data.

There’s also the risk that any age verification implemented will end up being bypassed by anyone with access to a VPN.

[…]

Source: The UK tries, once again, to age-gate pornography – The Verge

1. Age verification doesn’t work

2. Age verification doesn’t work

3. Age verification doesn’t work

4. Really, having to register as a porn watcher and then have your name in a leaky database?!

Web browser suspended because it can browse the web is back on Google Play after being taken down by incomplete DMCA

Google Play has reversed its latest ban on a web browser that keeps getting targeted by vague Digital Millennium Copyright Act (DMCA) notices. Downloader, an Android TV app that combines a browser with a file manager, was restored to Google Play last night.

Downloader, made by app developer Elias Saba, was suspended on Sunday after a DMCA notice submitted by copyright-enforcement firm MarkScan on behalf of Warner Bros. Discovery. It was the second time in six months that Downloader was suspended based on a complaint that the app’s web browser is capable of loading websites.

The first suspension in May lasted three weeks, but Google reversed the latest one much more quickly. As we wrote on Monday, the MarkScan DMCA notice didn’t even list any copyrighted works that Downloader supposedly infringed upon.

Instead of identifying specific copyrighted works, the MarkScan notice said only that Downloader infringed on “Properties of Warner Bros. Discovery Inc.” In the field where a DMCA complainant is supposed to provide an example of where someone can view an authorized example of the work, MarkScan simply entered the main Warner Bros. URL: https://www.warnerbros.com/.

DMCA notice was incomplete

Google has defended its DMCA-takedown process by saying that, under the law, it is obligated to remove any content when a takedown request contains the elements required by the copyright law. But in this case, Google Play removed Downloader even though the DMCA takedown request didn’t identify a copyrighted work—one of the elements required by the DMCA.

[…]

Downloader’s first suspension in May came after several Israeli TV companies complained that the app could be used to load a pirate website. In that case, an appeal that Saba filed with Google Play was quickly rejected. He also submitted a DMCA counter-notice, which gave the complainant 10 business days to file a legal action.

[…]

Saba still needed to republish the app to make it visible to users again. “I re-submitted the app last night in the Google Play Console, as instructed in the email, and it was approved and live a few hours later,” Saba told Ars today.

In a new blog post, Saba wrote that he expected the second suspension to last a few weeks, just like the first did. He speculated that it was reversed more quickly this time because the latest DMCA notice “provided no details as to how my app was infringing on copyrighted content, which, I believe, allowed Google to invalidate the takedown request.”

“Of course, I wish Google bothered to toss out the meritless DMCA takedown request when it was first submitted, as opposed to after taking ‘another look,’ but I understand that Google is probably flooded with invalid takedown requests because the DMCA is flawed,” Saba wrote. “I’m just glad Google stepped in when it did and I didn’t have to go through the entire DMCA counter notice process. The real blame for all of this goes to Warner Bros. Discovery and other corporations for funding companies like MarkScan which has issued DMCA takedowns in the tens of millions.”

Source: Web browser suspended because it can browse the web is back on Google Play | Ars Technica

The DMCA is an absolute horror of a system: an incredibly and unfixably broken “solution” to corporate greed

FBI Director Admits Agency Rarely Has Probable Cause When It Performs Backdoor Searches Of NSA Collections

After years of continuous, unrepentant abuse of surveillance powers, the FBI is facing the real possibility of seeing Section 702 curtailed, if not scuttled entirely.

Section 702 allows the NSA to gather foreign communications in bulk. The FBI benefits from this collection by being allowed to perform “backdoor” searches of NSA collections to obtain communications originating from US citizens and residents.

There are rules to follow, of course. But the FBI has shown little interest in adhering to these rules, just as much as the NSA has shown little interest in curtailing the amount of US persons’ communications “incidentally” collected by its dragnet.

[…]

Somehow, the FBI director managed to blurt out what everyone was already thinking: that the FBI needs this backdoor access because it almost never has the probable cause to support the search warrant normally needed to access the content of US persons’ communications.

“A warrant requirement would amount to a de facto ban, because query applications either would not meet the legal standard to win court approval; or because, when the standard could be met, it would be so only after the expenditure of scarce resources, the submission and review of a lengthy legal filing, and the passage of significant time — which, in the world of rapidly evolving threats, the government often does not have,” Wray said.

Holy shit. He just flat-out admitted it: a majority of FBI searches of US persons’ communications via Section 702 are unsupported by probable cause.

[…]

Unfortunately, both the FBI and the current administration are united in their desire to keep this executive authority intact. Both Wray and the Biden administration call the warrant requirement a “red line.” So, even if the House decides it needs to go (for mostly political reasons) and/or Wyden’s reform bill lands on the President’s desk, odds are the FBI will get its wish: warrantless access to domestic communications for the foreseeable future.

Source: FBI Director Admits Agency Rarely Has Probable Cause When It Performs Backdoor Searches Of NSA Collections | Techdirt

Copyright Bot Can’t Tell The Difference Between Star Trek Ship And Adult Film Actress

Given that the overwhelming majority of DMCA takedown notices are generated by copyright bots that are only moderately good at their job, at best, perhaps it’s not terribly surprising that these bots keep finding new and interesting ways to cause collateral damage unintentionally.

[…]

[…] a Tumblr site called “Mapping La Sirena.” If you’re a fan of Star Trek: Picard, you will know that’s the name of the main starship in that series. But if you’re a copyright enforcer for a certain industry, the bots you’ve set up for yourself apparently aren’t programmed with Star Trek fandom.

Transparency.automattic reports Tumblr has received numerous DMCA takedown notices from DMCA Piracy Prevention Inc, a third-party copyright monitoring service used frequently by content creators to prevent infringement of their original work. And these complaints occurred all because of the name “La Sirena,” which also happens to be the name of an adult content creator, La Sirena 69, who is one of Piracy Prevention’s customers.

In one copyright claim, over 90 Tumblr posts were targeted by the monitoring service because of the keyword match to “la sirena.” But instead of Automattic being alerted to La Sirena 69’s potentially infringed content, the company reported many of mappinglasirena.tumblr.com’s original posts.

Pure collateral damage. While not intentional per se, this is obviously still a problem. One of two things has to be the case: either we stop allowing copyright enforcement to be farmed out to a bunch of dumb bots that suck at their jobs or we insist that the bots stop sucking, which ain’t going to happen anytime soon. What cannot be allowed to happen is to shrug this sort of thing off as an innocent accident and oh well, too bad, so sad for the impact on the speech rights of the innocent.

There was nothing that remotely infringed La Sirena 69’s content. Everything about the complaints and takedown notices was wrong.

[…]

Source: Copyright Bot Can’t Tell The Difference Between Star Trek Ship And Adult Film Actress | Techdirt

Ubisoft blames ‘technical error’ for showing pop-up ads in Assassin’s Creed

Ubisoft is blaming a “technical error” for a fullscreen pop-up ad that appeared in Assassin’s Creed Odyssey this week. Reddit users say they spotted the pop-up on Xbox and PlayStation versions of the game, with an ad appearing just when you navigate to the map screen. “This is disgusting to experience while playing,” remarked one Reddit user, summarizing the general feeling against such pop-ups in the middle of gameplay.

“We have been made aware that some players encountered pop-up ads while playing certain Assassin’s Creed titles yesterday,” says Ubisoft spokesperson Fabien Darrigues, in a statement to The Verge. “This was the result of a technical error that we addressed as soon as we learned of the issue.”

The pop-up ad appeared during the middle of gameplay. Image: triddell24 (Reddit)

While it was unclear at first why the game suddenly started showing Black Friday pop-up ads to promote Ubisoft’s latest versions of Assassin’s Creed, the publisher later explained what went wrong in a post on X (formerly Twitter). Ubisoft says it was trying to put an ad for Assassin’s Creed Mirage in the main menu of other Assassin’s Creed games. However, a “technical error” caused the promotion to show up on in-game menus instead. Ubisoft says the issue has since been fixed.

We recently saw Microsoft use fullscreen Xbox pop-up ads to promote its own games, and they’ve been annoying Xbox owners. Microsoft’s ads only appear when you boot an Xbox, and not everyone seems to be getting them. Microsoft’s and Ubisoft’s pop-ups are still very different from the ads we’re used to seeing on game consoles. We’ve seen games like Saints Row 2 with ads running on billboards, or plenty of in-game ads in EA Games titles in the mid-to-late 2000s.

Fullscreen pop-up ads in the middle of a game certainly aren’t common. Imagine a world full of games you’ve paid $70 for and then ads popping up in the middle of gameplay. I truly hope that Ubisoft’s “technical error” never becomes a game industry reality.

Source: Ubisoft blames ‘technical error’ for showing pop-up ads in Assassin’s Creed – The Verge

US government pays AT&T to let cops search phone records without warrant

A senator has alleged that American law enforcement agencies snoop on US citizens and residents, seemingly without regard for the privacy provisions of the Fourth Amendment, under a secret program called the Hemisphere Project that allows police to conduct searches of trillions of phone records.

According to Senator Ron Wyden (D-OR), these searches “usually” happen without warrants. And after more than a decade of keeping people — lawmakers included — in the dark about Hemisphere, Wyden wants the Justice Department to reveal information about what he called a “long-running dragnet surveillance program.”

“I have serious concerns about the legality of this surveillance program, and the materials provided by the DoJ contain troubling information that would justifiably outrage many Americans and other members of Congress,” Wyden wrote in a letter [PDF] to US Attorney General Merrick Garland.

Under Hemisphere, the White House Office of National Drug Control Policy (ONDCP) pays telco AT&T to provide all federal, state, local, and tribal law enforcement agencies with the ability to request searches of trillions of domestic phone records dating back to at least 1987, plus the four billion call records added every day.

[…]

Hemisphere first came to light in a 2013 New York Times report that alleged the “scale and longevity of the data storage appears to be unmatched by other government programs, including the NSA’s gathering of phone call logs under the Patriot Act.”

It’s not classified, but that doesn’t mean the Feds want you to see it

Privacy advocates including the Electronic Frontier Foundation have filed Freedom of Information Act and state-level public records lawsuits to learn more about the secret snooping program.

Few have made a dent: it appears that the Feds are doing everything they can to keep Hemisphere secret.

Although the program and its documents are not classified, the Justice Department has marked them as “Law Enforcement Sensitive,” meaning their disclosure could hurt ongoing investigations. This designation also prevents the documents from being publicly released.

Senator Wyden wants the designation removed.

Additionally, Hemisphere is not subject to a federal Privacy Impact Assessment due to its funding structure, it’s claimed. The White House doesn’t directly pay AT&T – instead the ONDCP provides a grant to the Houston High Intensity Drug Trafficking Area, which is a partnership between federal, state, and local law enforcement agencies. And this partnership, in turn, pays AT&T to operate this surveillance scheme.

[…]

Source: US government pays AT&T to let cops search phone records • The Register

Google admits it’s making YouTube worse for ad block and non-Chrome (Edge, Firefox) users

[…]

Earlier this year, YouTube began interrupting videos for those using advert blockers with a pop-up encouraging them to either disable the offending extension or filter, or pay for YT’s ad-free premium tier.

More recently, netizens have reported experiencing delays in playback when using non-Chrome browsers as well.

Upon launching a video, Firefox users have reported a delay of roughly five seconds before playback would begin. In a statement to The Register, Google admitted it was intentionally making its content less binge-able for users unwilling to turn off offending extensions, though this wasn’t linked to any one browser.

“Ads are a vital lifeline for our creators that helps them run and grow their businesses,” a Google spokesperson explained. “In the past week, users using ad blockers may have experienced delays in loading, regardless of the browser they are using.”

To be clear, Google’s business model revolves around advertising, and ad blockers are specifically called out as being in violation of its terms of service. Google also makes Chrome, the widely-used browser that Mozilla’s Firefox and others try to compete against.

Unfortunately, the method used by Google to detect the presence of ad blockers and trigger the delay appears to be prone to false positives. Several netizens have reported experiencing delays when using Firefox or Microsoft’s Edge browser without an ad blocker installed.

[…]

The Register was unable to replicate this behavior in Firefox with or without an ad blocker enabled. This suggests Google could be experimenting to see just how far it can push users to convince them to turn off their ad blockers for good. In other words, not everyone has experienced, or will experience, this delay.

YouTube said its ad block detection does not target any specific browsers, and that people who continue to use ad blockers may experience degraded or interrupted service as its detection efforts evolve.

[…]

Source: Google admits it’s making YouTube worse for ad block users • The Register

Also, the technology Google uses to detect your ad blocker basically amounts to spyware (Privacy advocate challenges YouTube’s ad blocking detection (which isn’t spyware))

The Oura Ring Is a $300 Sleep Tracker That Suddenly Needs a Subscription

[…] Now in its third iteration, the Oura Ring tracks and analyzes a host of metrics, including your heart-rate variability (HRV), blood oxygen rate, body temperature, and sleep duration. It uses this data to give you three daily scores, tallying the quality of your sleep, activity, and “readiness.” It can also determine your chronotype (your body’s natural preferences for sleep or wakefulness), give insight into hormonal factors that can affect your sleep, and (theoretically) alert you when you’re getting sick.

I wore the Oura Ring for six months; it gave me tons of data about myself and helped me pinpoint areas in my sleep and health that I could improve. It’s also more comfortable and discreet to wear than most wristband wearable trackers.

However, the ring costs about $300 or more, depending on the style and finish, and Oura’s app now requires a roughly $72 yearly subscription to access most of the data and reports.

(Oura recently announced that the cost of the ring is eligible for reimbursement through a flexible spending account [FSA] or health spending account [HSA]. The subscription is not.)

If you just want to track your sleep cycles and get tips, a free (or modestly priced) sleep-tracking app may do the trick.

[…]

Source: The Oura Ring Is a $300 Sleep Tracker That Provides Tons of Data. But Is It Worth It? | Reviews by Wirecutter

So what do you get with the membership?

  • In-depth sleep analysis, every morning
  • Personalized health insights, 24/7
  • Live & accurate heart rate monitoring
  • Body temperature readings for early illness detection and period prediction (in beta)
  • Workout Heart Rate Tracking
  • SpO2 Monitoring
  • Rest Mode
  • Bedtime Guidance
  • Track More Movement
  • Restorative Time
  • Trends Over Time
  • Tags
  • Insights from Audio Sessions

And what if you want to continue for free?

Non-paying members have access to 3 simple daily scores: Sleep, Readiness, and Activity, as well as our interactive and educational Explore content.

Source: More power to you with Oura Membership.

This is a pretty stunning turn of events:

one, because it was supposed to be the privacy-friendly option – so what data are they sending to central servers, and why? (That’s the only way they can justify a subscription.) And

two, why is data that doesn’t need to be sent to the servers not being shown in the free version of the app?!

For the price of the ring this is a pretty shameless money grab.

The Epic Vs. Google Courtroom Battle Shows Google Routinely Hiding and Deleting Chats and Documents They Should (legally) Keep

[…] back in 2020 Epic added an option to Fortnite on mobile that let players buy Fortnite’s in-game V-Bucks currency directly from the company at a discount, bypassing both Apple’s and Google’s app store fees. This violated Apple and Google policies Epic agreed to and quickly led to both companies removing Fortnite from their respective mobile phone app stores. That triggered a lawsuit from Epic and led to a protracted 2021 legal fight against Apple over how Apple ran its app store, the monopoly it may have had, and the fees it charged app developers on in-app purchases. And now Epic is waging a similar legal battle against Google.

[…]

As reported by The Verge on November 6, the first day of the trial, Epic was allowed to tell the jury that Google may have destroyed or hidden relevant evidence. And throughout the first six days of the trial, Epic’s lawyers have continued to bring up how few chatlogs Google provided during discovery and grilled Google execs over deleted chats and jokes about hiding conversations.

On November 7, Google Information Governance Lead Genaro Lopez was questioned multiple times about the seemingly missing chatlogs, and the company’s policy of telling employees to chat “off the record” about sensitive issues that could cause problems later down the line. Epic’s legal team also went after Google’s chat system, which includes a tool that lets its employees prevent chat history from being saved, and pointed out that Google employees were doing this even after a legal hold was put on the company following the Fortnite lawsuit. Asked if Google could have changed this policy and forced chats to be saved, Lopez agreed that it could have been altered, but wasn’t.

“You cannot guarantee that the documents that were destroyed will contradict the testimony we’re going to hear?” asked Epic’s lawyer. Lopez couldn’t make that guarantee.

On November 8, Google Play’s VP of Apps and Games Purnima Kochikar was also questioned about deleted chats and explained that the court won’t ever see her chat logs.

“During this case, you had your default setting to delete chats every 24 hours, correct?” Epic’s legal team asked.

“That was the default,” Kochikar said. She also confirmed she didn’t take any steps to change this setting.


On November 9, some saved chat messages from Google’s head of platforms & ecosystems strategy for Android, Margaret Lam, showed her directly asking someone to turn off chat history due to “sensitivity with legal these days :)”.

Lam claimed in court that no Google attorney had briefed her on preserving chats during Epic’s legal hold. However, Epic’s lawyers weren’t done, and continued to show messages in which Lam asked people to turn off chat history. The Verge reports that one of these situations included a colleague pushing back and insisting that he was on a legal hold. In response, Lam messaged: “Ok maybe I take you off this convo :)”.

At another point, Lam messaged someone else: “also just realized our history is on 🙊 can we turn it off? Haha”.

Lam did push back, claiming that she went to legal for better advice after these conversations and now understands she failed to comply with the legal hold.

Then on November 13, James Kolotouros, VP of Android platform partnerships, admitted that he can’t remember a single instance when he might have turned on his chat history.

Google’s CEO wasn’t saving evidence, either

And today, during Google CEO Sundar Pichai’s time on the stand, Epic was able to get him to confirm that he also wasn’t saving his chats, letting messages auto-delete after 24 hours. Epic also showed evidence of Pichai asking for chat history to be turned off and then trying to delete that message, though the Google CEO claimed that was a glitch.

Not only that, Pichai confirmed that he has in the past marked documents with attorney/client privilege even when he was not seeking legal advice just so those emails didn’t get forwarded. Pichai told Epic’s lawyers that nobody told him that was wrong, though he now admits that he shouldn’t have done that.

Epic’s goal for all of this has been to show that Google might have been deleting chats or hiding evidence. That would help it make the case to the jury that the Android platform creator is trying to avoid creating a legal paper trail which could imply the company has something to hide from the court. That in turn makes Google seem less trustworthy and helps color all of its actions in a different light, something that could ultimately swing a jury one way or the other.

Regardless of whether the jury cares about what has happened, the judge in the case very much seems to. Judge James Donato appears so fed up with the situation that on November 13, he demanded that Google’s chief legal officer show up in court by November 16 to explain what’s going on. If he doesn’t show or can’t give a good enough reason for why so much evidence was seemingly destroyed, the judge is considering instructing the jury to not trust Google as much as they might have before.

Needless to say, such a turn would not be good for Google’s fortunes in its continuing proceedings with Epic.

Source: The Epic Vs. Google Courtroom Battle Sounds Bonkers

The EU Commission’s Alleged CSAM Regulation ‘Experts’, who gave it free rein to spy on everyone, can’t be found. OK then.

Everyone who wants client-side scanning to be a thing insists it’s a good idea with no potential downsides. The only hangup, they insist, is tech companies’ unwillingness to implement it. And by “implement,” I mean — in far too many cases — introducing deliberate (and exploitable!) weaknesses in end-to-end encryption.

End-to-end encryption only works if both ends are encrypted. Taking the encryption off one side to engage in content scanning makes it half of what it was. And if you get in the business of scanning users’ content for supposed child sexual abuse material (CSAM), governments may start asking you to “scan” for other stuff… like infringing content, terrorist stuff, people talking about crimes, stuff that contradicts the government’s narratives, things political rivals are saying. The list goes on and on.

Multiple experts have pointed out how the anti-CSAM efforts preferred by the EU would not only not work, but also subject millions of innocent people to the whims of malicious hackers and malicious governments. Governments also made these same points, finally forcing the EU Commission to back down on its attempt to undermine encryption, if not (practically) outlaw it entirely.

The Commission has always claimed its anti-encryption, pro-client-side scanning stance is backed by sound advice given to it by the experts it has consulted. But when asked who was consulted, the EU Commission has refused to answer the question. This is from the Irish Council of Civil Liberties (ICCL), which asked the Commission a simple question, but — like the Superintendent Chalmers referenced in the headline — was summarily rejected.

In response to a request for documents pertaining to the decision-making behind the proposed CSAM regulation, the European Commission failed to disclose a list of companies who were consulted about the technical feasibility of detecting CSAM without undermining encryption. This list “clearly fell within the scope” of the Irish Council for Civil Liberties’ request. 

If you’re not familiar with the reference, we’ll get you up to speed.

22 Short Films About Springfield is an episode of “The Simpsons” that originally aired in 1996. One particular “film” has become an internet meme legend: the one dealing with Principal Seymour Skinner’s attempt to impress his boss (Superintendent Chalmers) with a home-cooked meal.

One thing leads to another (and by one thing to another, I mean a fire in the kitchen as Skinner attempts to portray fast-food burgers as “steamed hams” and not the “steamed clams” promised earlier). That culminates in this spectacular cover-up by Principal Skinner when the superintendent asks about the extremely apparent fire occurring in the kitchen:

Principal Skinner: Oh well, that was wonderful. A good time was had by all. I’m pooped.

Chalmers: Yes. I should be– Good Lord! What is happening in there?

Principal Skinner: Aurora borealis.

Chalmers: Uh- Aurora borealis. At this time of year, at this time of day, in this part of the country, localized entirely within your kitchen?

Principal Skinner: Yes.

Chalmers [meekly]: May I see it?

Principal Skinner: No.

That is what happened here. Everyone opposing the EU Commission’s CSAM (i.e., “chat control”) efforts trotted out their experts, making it clearly apparent who was saying what and what their relevant expertise was. The EU insisted it had its own battery of experts. The ICCL said: “May we see them?”

The EU Commission: No.

Not good enough, said the ICCL. But that’s what a rights advocate would be expected to say. What’s less expected is the EU Ombudsman declaring the ICCL had the right to see this particularly specific aurora borealis.

After the Commission acknowledged to the EU Ombudsman that it, in fact, had such a list, but failed to disclose its existence to Dr Kris Shrishak, the Ombudsman held the Commission’s behaviour constituted “maladministration”.  

The Ombudsman held: “[t]he Commission did not identify the list of experts as falling within the scope of the complainant’s request. This means that the complainant did not have the opportunity to challenge (the reasons for) the institution’s refusal to disclose the document. This constitutes maladministration.” 

As the report further notes, the only existing documentation of this supposed consultation with experts has been reduced to a single self-serving document issued by the EU Commission. Any objections or interjections were added/subtracted as preferred by the EU Commission before presenting a “final” version that served its preferences. Any supporting documentation, including comments from participating stakeholders, were sent to the digital shredder.

As concerns the EUIF meetings, the Commission representatives explained that three online technical workshops took place in 2020. During the first workshop, academics, experts and companies were invited to share their perspectives on the matter as well as any documents that could be valuable for the discussion. After this workshop, a first draft of the ‘outcome document’ was produced, which summarises the input given orally by the participants and references a number of relevant documents. This first draft was shared with the participants via an online file sharing service and some participants provided written comments. Other participants commented orally on the first draft during the second workshop. Those contributions were then added to the final version of the ‘outcome document’ that was presented during the third and final workshop for the participants’ endorsement. This ‘outcome document’ is the only document that was produced in relation to the substance of these workshops. It was subsequently shared with the EUIF. One year later, it was used as supporting information to the impact assessment report.

In other words, the EU took what it liked and included it. The rest of it disappeared from the permanent record, supposedly because the EU Commission routinely purges any email communications more than two years old. This is obviously ridiculous in this context, considering this particular piece of legislation has been under discussion for far longer than that.

But, in the end, the EU Commission wins because it’s the larger bureaucracy. The ombudsman refused to issue a recommendation. Instead, it has instructed the Commission to treat the ICCL’s request as “new” and perform another search for documents. “Swiftly.” Great, as far as that goes. But it doesn’t go far. The ombudsman also says it believes the EU Commission when it says only its version of the EUIF report survived the periodic document cull.

In the end, all that survives is this: the EU consulted with affected entities. It asked them to comment on the proposal. It folded those comments into its presentation. It likely presented only comments that supported its efforts. Dissenting opinions were auto-culled by EU Commission email protocols. It never sought further input, despite having passed the two-year mark without having converted the proposal into law. All that’s left, the ombudsman says, is likely a one-sided version of the Commission’s proposal. And if the ICCL doesn’t like it, well… it will have to find some other way to argue with the “experts” the Commission either ignored or auto-deleted. The government wins, even without winning arguments. Go figure.

Source: Steamed Hams, Except It’s The EU Commission’s Alleged CSAM Regulation ‘Experts’ | Techdirt

Decoupling for IT Security (=privacy)

Whether we like it or not, we all use the cloud to communicate and to store and process our data. We use dozens of cloud services, sometimes indirectly and unwittingly. We do so because the cloud brings real benefits to individuals and organizations alike. We can access our data across multiple devices, communicate with anyone from anywhere, and command a remote data center’s worth of power from a handheld device.

But using the cloud means our security and privacy now depend on cloud providers. Remember: the cloud is just another way of saying “someone else’s computer.” Cloud providers are single points of failure and prime targets for hackers to scoop up everything from proprietary corporate communications to our personal photo albums and financial documents.

The risks we face from the cloud today are not an accident. For Google to show you your work emails, it has to store many copies across many servers. Even if they’re stored in encrypted form, Google must decrypt them to display your inbox on a webpage. When Zoom coordinates a call, its servers receive and then retransmit the video and audio of all the participants, learning who’s talking and what’s said. For Apple to analyze and share your photo album, it must be able to access your photos.

Hacks of cloud services happen so often that it’s hard to keep up. Breaches can be so large as to affect nearly every person in the country, as in the Equifax breach of 2017, or a large fraction of the Fortune 500 and the U.S. government, as in the SolarWinds breach of 2019-20.

It’s not just attackers we have to worry about. Some companies use their access—benefiting from weak laws, complex software, and lax oversight—to mine and sell our data.

[…]

The less someone knows, the less they can put you and your data at risk. In security this is called Least Privilege. The decoupling principle applies that idea to cloud services by making sure systems know as little as possible while doing their jobs. It states that we gain security and privacy by separating private data that today is unnecessarily concentrated.

To unpack that a bit, consider the three primary modes for working with our data as we use cloud services: data in motion, data at rest, and data in use. We should decouple them all.

Our data is in motion as we exchange traffic with cloud services such as videoconferencing servers, remote file-storage systems, and other content-delivery networks. Our data at rest, while sometimes on individual devices, is usually stored or backed up in the cloud, governed by cloud provider services and policies. And many services use the cloud to do extensive processing on our data, sometimes without our consent or knowledge. Most services involve more than one of these modes.

[…]

Cryptographer David Chaum first applied the decoupling approach in security protocols for anonymity and digital cash in the 1980s, long before the advent of online banking or cryptocurrencies. Chaum asked: how can a bank or a network service provider provide a service to its users without spying on them while doing so?

Chaum’s ideas included sending Internet traffic through multiple servers run by different organizations and divvying up the data so that a breach of any one node reveals minimal information about users or usage. Although these ideas have been influential, they have found only niche uses, such as in the popular Tor browser.

Trust, but Don’t Identify

The decoupling principle can protect the privacy of data in motion, such as financial transactions and Web browsing patterns that currently are wide open to vendors, banks, websites, and Internet Service Providers (ISPs).

1. Barath orders Bruce’s audiobook from Audible. 2. His bank does not know what he is buying, but it guarantees the payment. 3. A third party decrypts the order details but does not know who placed the order. 4. Audible delivers the audiobook and receives the payment.

DECOUPLED E-COMMERCE: By inserting an independent verifier between the bank and the seller and by blinding the buyer’s identity from the verifier, the seller and the verifier cannot identify the buyer, and the bank cannot identify the product purchased. But all parties can trust that the signed payment is valid.
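
The blinding step is what makes that flow work, and it goes back to the Chaum blind signatures mentioned above. Here is a toy sketch of the RSA version: the bank signs a payment token without ever seeing it, yet anyone can later verify the signature. The tiny textbook numbers are chosen for readability; this is an illustration of the idea, not real cryptography and not the article’s exact scheme.

```python
# Toy Chaum-style RSA blind signature: the bank vouches for a payment
# token without learning what it is for. Textbook RSA, tiny numbers.
import math
import secrets

# Hypothetical bank keypair (tiny primes for readability only).
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private signing exponent

m = 1234                            # payment token the buyer wants signed

# Buyer blinds the token with a random factor r before sending it to the bank.
while True:
    r = secrets.randbelow(n - 2) + 2
    if math.gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# Bank signs the blinded token; it learns nothing about m itself.
blind_sig = pow(blinded, d, n)

# Buyer unblinds to recover a valid signature on the original token.
sig = (blind_sig * pow(r, -1, n)) % n

assert pow(sig, e, n) == m          # anyone can verify the bank's signature
```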

DECOUPLED WEB BROWSING: ISPs can track which websites their users visit because requests to the Domain Name System (DNS), which converts domain names to IP addresses, are unencrypted. A new protocol called Oblivious DNS can protect users’ browsing requests from third parties. Each name-resolution request is encrypted twice and then sent to an intermediary (a “proxy”) that strips out the user’s IP address and decrypts the outer layer before passing the request to a domain name server, which then decrypts the actual request. Neither the ISP nor any other computer along the way can see what name is being queried. The Oblivious resolver has the key needed to decrypt the request but no information about who placed it. The resolver encrypts its reply so that only the user can read it.

  1. Bruce’s browser sends a doubly encrypted request for the IP address of sigcomm.org.
  2. A third-party proxy server decrypts one layer and passes on the request, replacing Bruce’s identity with an anonymous ID.
  3. An Oblivious DNS server decrypts the request, looks up the IP address, and sends it back in an encrypted reply.
  4. The proxy server forwards the encrypted reply to Bruce’s browser.
  5. Bruce’s browser decrypts the response to obtain the IP address of sigcomm.org.
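
For readers who want to see the moving parts, here is a structural sketch of that split in Python. It substitutes pre-shared symmetric keys for the public-key encryption (HPKE) the deployed protocols use and omits the TLS hop between client and proxy; the point is only that the proxy sees who asked and the resolver sees what was asked, never both.

```python
# Structural Oblivious DNS sketch. Requires the `cryptography` package.
from cryptography.fernet import Fernet

resolver_key = Fernet.generate_key()   # known to client and resolver only
reply_key = Fernet.generate_key()      # client-chosen key for the answer

# Client: encrypt the query plus a reply key so only the resolver can read it.
inner = Fernet(resolver_key).encrypt(b"A? sigcomm.org|" + reply_key)
request = {"from": "203.0.113.7", "payload": inner}   # all the proxy sees

# Proxy: knows who asked, strips the identity, forwards the opaque payload.
forwarded = {"from": "proxy", "payload": request["payload"]}

# Resolver: knows what was asked, but not by whom.
plaintext = Fernet(resolver_key).decrypt(forwarded["payload"])
qname, client_key = plaintext.rsplit(b"|", 1)
answer = Fernet(client_key).encrypt(qname + b" -> 93.184.216.34")

# Proxy relays the encrypted answer back; only the client can read it.
print(Fernet(reply_key).decrypt(answer))  # b'A? sigcomm.org -> 93.184.216.34'
```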

Similar methods have been extended beyond DNS to multiparty-relay protocols that protect the privacy of all Web browsing through free services such as Tor and subscription services such as INVISV Relay and Apple’s iCloud Private Relay.

[…]

Meetings that were once held in a private conference room are now happening in the cloud, and third parties like Zoom see it all: who, what, when, where. There’s no reason a videoconferencing company has to learn such sensitive information about every organization it provides services to. But that’s the way it works today, and we’ve all become used to it.

There are multiple threats to the security of that Zoom call. A Zoom employee could go rogue and snoop on calls. Zoom could spy on calls of other companies or harvest and sell user data to data brokers. It could use your personal data to train its AI models. And even if Zoom and all its employees are completely trustworthy, the risk of Zoom getting breached is omnipresent. Whatever Zoom can do with your data in motion, a hacker can do to that same data in a breach. Decoupling data in motion could address those threats.

[…]

Most storage and database providers started encrypting data on disk years ago, but that’s not enough to ensure security. In most cases, the data is decrypted every time it is read from disk. A hacker or malicious insider silently snooping at the cloud provider could thus intercept your data despite it having been encrypted.

Cloud-storage companies have at various times harvested user data for AI training or to sell targeted ads. Some hoard it and offer paid access back to us or just sell it wholesale to data brokers. Even the best corporate stewards of our data are getting into the advertising game, and the decade-old feudal model of security—where a single company provides users with hardware, software, and a variety of local and cloud services—is breaking down.

Decoupling can help us retain the benefits of cloud storage while keeping our data secure. As with data in motion, the risks begin with access the provider has to raw data (or that hackers gain in a breach). End-to-end encryption, with the end user holding the keys, ensures that the cloud provider can’t independently decrypt data from disk.
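
A minimal sketch of what that looks like in practice, using the `cryptography` package: the key is generated and held client-side, so the provider only ever handles ciphertext.

```python
# User-held-key encryption for cloud storage: the provider stores and
# serves only opaque bytes.
from cryptography.fernet import Fernet

user_key = Fernet.generate_key()        # generated and kept on the device
box = Fernet(user_key)

document = b"Q3 board minutes: the acquisition target is ..."
ciphertext = box.encrypt(document)

cloud_store = {"minutes.txt": ciphertext}   # all the provider ever holds

# A snooping insider, or a breach, yields only ciphertext:
assert cloud_store["minutes.txt"] != document

# Only the key holder can recover (and authenticate) the plaintext:
assert box.decrypt(cloud_store["minutes.txt"]) == document
```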

[…]

Modern protocols for decoupled data storage, like Tim Berners-Lee’s Solid, provide this sort of security. Solid is a protocol for distributed personal data stores, called pods. By giving users control over both where their pod is located and who has access to the data within it—at a fine-grained level—Solid ensures that data is under user control even if the hosting provider or app developer goes rogue or has a breach. In this model, users and organizations can manage their own risk as they see fit, sharing only the data necessary for each particular use.
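
As a toy illustration of that model (this is not the Solid API; the class and names here are invented for the example), a pod can be thought of as a data store that checks every read against grants the user has set, at per-resource granularity.

```python
# Toy pod-style access control: nothing is readable without an explicit grant.
from dataclasses import dataclass, field

@dataclass
class Pod:
    data: dict                                    # resource path -> contents
    grants: dict = field(default_factory=dict)    # (agent, resource) -> modes

    def grant(self, agent: str, resource: str, modes: set):
        self.grants[(agent, resource)] = modes

    def read(self, agent: str, resource: str):
        if "read" not in self.grants.get((agent, resource), set()):
            raise PermissionError(f"{agent} may not read {resource}")
        return self.data[resource]

pod = Pod(data={"/photos/2023": "...", "/finance/loan-docs": "..."})
pod.grant("bank.example", "/finance/loan-docs", {"read"})  # one use, one grant

pod.read("bank.example", "/finance/loan-docs")   # allowed
try:
    pod.read("adtech.example", "/photos/2023")   # never granted
except PermissionError as err:
    print(err)
```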

[…]

The last few years have seen the advent of general-purpose, hardware-enabled secure computation. This is powered by special functionality on processors known as trusted execution environments (TEEs) or secure enclaves. TEEs decouple who runs the chip (a cloud provider, such as Microsoft Azure) from who secures the chip (a processor vendor, such as Intel) and from who controls the data being used in the computation (the customer or user). A TEE can keep the cloud provider from seeing what is being computed. The results of a computation are sent via a secure tunnel out of the enclave or encrypted and stored. A TEE can also generate a signed attestation that it actually ran the code that the customer wanted to run.

With TEEs in the cloud, the final piece of the decoupling puzzle drops into place. An organization can keep and share its data securely at rest, move it securely in motion, and decrypt and analyze it in a TEE such that the cloud provider doesn’t have access. Once the computation is done, the results can be reencrypted and shipped off to storage. CPU-based TEEs are now widely available among cloud providers, and soon GPU-based TEEs—useful for AI applications—will be common as well.
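
The attestation step is the crux, and a toy sketch helps. The code below fakes the vendor-rooted signature with an HMAC (real TEEs use asymmetric keys fused into the chip plus a certificate chain), but the flow is the same: the enclave reports a hash of the code it runs, and the customer verifies both the signature and the expected measurement.

```python
# Toy sketch of TEE remote attestation; not a real vendor API.
import hashlib
import hmac

VENDOR_ROOT_KEY = b"stand-in for the vendor's signing key"

def enclave_attest(code: bytes) -> dict:
    measurement = hashlib.sha256(code).hexdigest()
    sig = hmac.new(VENDOR_ROOT_KEY, measurement.encode(), "sha256").hexdigest()
    return {"measurement": measurement, "signature": sig}

def customer_verifies(report: dict, expected_code: bytes) -> bool:
    expected_sig = hmac.new(
        VENDOR_ROOT_KEY, report["measurement"].encode(), "sha256"
    ).hexdigest()
    return (hmac.compare_digest(report["signature"], expected_sig)
            and report["measurement"] == hashlib.sha256(expected_code).hexdigest())

code = b"credit-scoring-model-v1 binary"
report = enclave_attest(code)
assert customer_verifies(report, code)             # ran exactly what we sent
assert not customer_verifies(report, b"tampered")  # substitutions are caught
```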

[…]

Decoupling also allows us to look at security more holistically. For example, we can dispense with the distinction between security and privacy. Historically, privacy meant freedom from observation, usually for an individual person. Security, on the other hand, was about keeping an organization’s data safe and preventing an adversary from doing bad things to its resources or infrastructure.

There are still rare instances where security and privacy differ, but organizations and individuals are now using the same cloud services and facing similar threats. Security and privacy have converged, and we can usefully think about them together as we apply decoupling.

[…]

Decoupling isn’t a panacea. There will always be new, clever side-channel attacks. And most decoupling solutions assume a degree of noncollusion between independent companies or organizations. But that noncollusion is already an implicit assumption today: we trust that Google and Advanced Micro Devices will not conspire to break the security of the TEEs they deploy, for example, because the reputational harm from being found out would hurt their businesses. The primary risk, real but also often overstated, is if a government secretly compels companies to introduce backdoors into their systems. In an age of international cloud services, this would be hard to conceal and would cause irreparable harm.

[…]

Imagine that individuals and organizations held their credit data in cloud-hosted repositories that enable fine-grained encryption and access control. Applying for a loan could then take advantage of all three modes of decoupling. First, the user could employ Solid or a similar technology to grant access to Equifax and a bank only for the specific loan application. Second, the communications to and from secure enclaves in the cloud could be decoupled and secured to conceal who is requesting the credit analysis and the identity of the loan applicant. Third, computations by a credit-analysis algorithm could run in a TEE. The user could use an external auditor to confirm that only that specific algorithm was run. The credit-scoring algorithm might be proprietary, and that’s fine: in this approach, Equifax doesn’t need to reveal it to the user, just as the user doesn’t need to give Equifax access to unencrypted data outside of a TEE.

Building this is easier said than done, of course. But it’s practical today, using widely available technologies. The barriers are more economic than technical.

[…]

One of the challenges of trying to regulate tech is that industry incumbents push for tech-only approaches that simply whitewash bad practices. For example, when Facebook rolls out “privacy-enhancing” advertising, but still collects every move you make, has control of all the data you put on its platform, and is embedded in nearly every website you visit, that privacy technology does little to protect you. We need to think beyond minor, superficial fixes.

Decoupling might seem strange at first, but it’s built on familiar ideas. Computing’s main tricks are abstraction and indirection. Abstraction involves hiding the messy details of something inside a nice clean package: when you use Gmail, you don’t have to think about the hundreds of thousands of Google servers that have stored or processed your data. Indirection involves creating a new intermediary between two existing things, such as when Uber wedged its app between passengers and drivers.

The cloud as we know it today is born of three decades of increasing abstraction and indirection. Communications, storage, and compute infrastructure for a typical company were once run on a server in a closet. Next, companies no longer had to maintain a server closet, but could rent a spot in a dedicated colocation facility. After that, colocation facilities decided to rent out their own servers to companies. Then, with virtualization software, companies could get the illusion of having a server while actually just running a virtual machine on a server they rented somewhere. Finally, with serverless computing and most types of software as a service, we no longer know or care where or how software runs in the cloud, just that it does what we need it to do.

[…]

We’re now at a turning point where we can add further abstraction and indirection to improve security, turning the tables on the cloud providers and taking back control as organizations and individuals while still benefiting from what they do.

The needed protocols and infrastructure exist, and there are services that can do all of this already, without sacrificing the performance, quality, and usability of conventional cloud services.

But we cannot just rely on industry to take care of this. Self-regulation is a time-honored stall tactic: a piecemeal or superficial tech-only approach would likely undermine the will of the public and regulators to take action. We need a belt-and-suspenders strategy, with government policy that mandates decoupling-based best practices, a tech sector that implements this architecture, and public awareness of both the need for and the benefits of this better way forward.

Source: Essays: Decoupling for Security – Schneier on Security

Google Sues Men Who Weaponized DMCA Notices to Crush Competition

Two men who allegedly used 65 Google accounts to bombard Google with fraudulent DMCA takedown notices targeting up to 620,000 URLs have been named in a Google lawsuit filed in California on Monday. Google says the men weaponized copyright law’s notice-and-takedown system to sabotage competitors’ trade, while damaging the search engine’s business and those of its customers.

While all non-compliant DMCA takedown notices are invalid by default, there’s a huge difference between those sent in error and others crafted for purely malicious purposes.

Bogus DMCA takedown notices are nothing new, but the rise of organized groups using malicious DMCA notices as a business tool has been apparent in recent years.

Since the vast majority of culprits face zero consequences, that may have acted as motivation to send even more. Through a lawsuit filed at a California court on Monday, Google appears to be sending the message that enough is enough.

Defendants Weaponized DMCA Takedowns

Google’s complaint targets Nguyen Van Duc and Pham Van Thien, both said to be residents of Vietnam and the leaders of up to 20 Doe defendants. Google says the defendants systematically abused accounts “to submit a barrage” of fraudulent copyright takedown requests aimed at removing their competitors’ website URLs from Google Search results.

[…]

The misrepresentations in notices sent to Google were potentially damaging to other parties too. Under fake names, the defendants falsely claimed to represent large companies such as Amazon, Twitter, and NBC News, plus sports teams including the Philadelphia Eagles, Los Angeles Lakers, and San Diego Padres.

In similarly false notices, they claimed to represent famous individuals including Elon Musk, Taylor Swift, LeVar Burton, and Kanye West.

The complaint notes that some notices were submitted under company names that do not exist in the United States, at addresses where innocent families and businesses can be found. Google says that despite these claims, the defendants can be found in Vietnam from where they proudly advertise their ‘SEO’ scheme to others, including via YouTube.

[…]

Source: Google Sues Men Who Weaponized DMCA Notices to Crush Competition * TorrentFreak

Who would have thought that such a poorly designed piece of copyright law would be used for this? Probably almost everyone who has ever been hit by a DMCA notice with no recourse. And this is but a tiny, tiny fraction of the iceberg, with the actual copyright holders at the top. The only way to stop this is by taking down the whole DMCA system.

New Israeli Law Makes Consuming ‘Terrorist’ Content A Criminal Offense

It’s amazing just how much war and conflict can change a country. On October 7th, Hamas blitzed Israel with an attack that was plainly barbaric. Yes, this is a conflict that has been simmering with occasional flashpoints for decades. No, neither side can even begin to claim it has entirely clean hands as a result of those decades of conflict. We can get the equivocating out of the way. October 7th was different, the worst single day of murder of the Jewish community since the Holocaust. And even in the immediate aftermath, those outside of Israel and those within knew that the attack was going to result in both an immediate reaction from Israel and longstanding changes within its borders. And those of us from America, or those that witnessed how our country reacted to 9/11, knew precisely how much danger this period of change represented.

It’s already started. First, Israel loosened the reins to allow once-blacklisted spyware companies to use their tools to help Israel find the hundreds of hostages Hamas claims to have taken. While that goal is perfectly noble, of course, the willingness to engage with more nefarious tools to achieve that end was already apparent. And now we learn that Israel’s government has taken the next step in amending its counterterrorism laws to make the consumption of “terrorist” content a criminal offense, punishable with jail time.

The bill, which was approved by a 13-4 majority in the Knesset, is a temporary two-year measure that amends Article 24 of the counterterrorism law to ban the “systematic and continuous consumption of publications of a terrorist organization under circumstances that indicate identification with the terrorist organization”.

It identifies the Palestinian group Hamas and the ISIL (ISIS) group as the “terrorist” organisations to which the offence applies. It grants the justice minister the authority to add more organisations to the list, in agreement with the Ministry of Defence and with the approval of the Knesset’s Constitution, Law, and Justice Committee.

Make no mistake, this is the institution of thought crime. Read those two paragraphs one more time and realize just how much the criminalization of consumption of materials relies on the judgement and interpretation of those enforcing it. What is systematic in terms of this law? What is a publication? What constitutes a “terrorist organization,” not in the case of Hamas and ISIL, but in that ominous bit at the end of the second paragraph, where more organizations can — and will — be added to this list?

And most importantly, how in the world is the Israeli government going to determine “circumstances that indicate identification with the terrorist organization?”

“This law is one of the most intrusive and draconian legislative measures ever passed by the Israeli Knesset since it makes thoughts subject to criminal punishment,” said Adalah, the Legal Centre for Arab Minority Rights in Israel. It warned that the amendment would criminalise “even passive social media use” amid a climate of surveillance and curtailment of free speech targeting Palestinian citizens of Israel.

“This legislation encroaches upon the sacred realm of an individual’s personal thoughts and beliefs and significantly amplifies state surveillance of social media use,” the statement added. Adalah is sending a petition to the Supreme Court to challenge the bill.

This has all the hallmarks of America’s overreaction to the 9/11 attacks. We still haven’t unwound, not even close, all of the harm that was done in the aftermath of those attacks, all in the name of safety. We are still at a net-negative value in terms of our civil liberties due to that overreaction. President Biden even reportedly warned Israel not to ignore our own mistakes, but they’re doing it anyway.

And circling back to the first quotation and the claim that this law is a temporary two-year measure: that’s just not how this works. If this law is allowed to continue to exist, it will be extended, and then extended again. The United States is still operating under the Authorization for Use of Military Force of 2001 and used it to conduct strikes in Somalia under the Biden administration, two decades later.

The right to speech and thought is as bedrock a thing as exists for a democracy. If we accept that premise, then it is simply impossible to “protect a democracy” by limiting the rights of speech and thought. And that’s precisely what this new law in Israel does: it chips away at the democracy of the state in order to protect it.

That’s not how Israel wins this war, if that is in fact the goal.

Source: New Israeli Law Makes Consuming ‘Terrorist’ Content A Criminal Offense | Techdirt

European digital identity: Council and Parliament reach a provisional agreement on eID

[…]

Under the new law, member states will offer citizens and businesses digital wallets that will be able to link their national digital identities with proof of other personal attributes (e.g., driving licence, diplomas, bank account). Citizens will be able to prove their identity and share electronic documents from their digital wallets with a click of a button on their mobile phone.

The new European digital identity wallets will enable all Europeans to access online services with their national digital identification, which will be recognised throughout Europe, without having to use private identification methods or unnecessarily sharing personal data. User control ensures that only information that needs to be shared will be shared.
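
A toy sketch of the selective-disclosure idea at work (this is not the EU wallet’s actual architecture, just the general principle): if the issuing member state signs each attribute separately, the wallet can present one claim, such as being over 18, without transmitting anything else. The HMAC below stands in for a real issuer signature.

```python
# Toy selective disclosure: each attribute is individually signed, so each
# can be shown on its own.
import hashlib
import hmac

ISSUER_KEY = b"stand-in for the member state's signing key"

def sign(claim: str) -> str:
    return hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()

wallet = {claim: sign(claim) for claim in
          ["name=Maria", "birth_date=1990-05-01", "age_over_18=true",
           "driving_licence=B"]}

# An age check: the wallet discloses exactly one signed claim.
claim = "age_over_18=true"
disclosure = (claim, wallet[claim])

# Verifier: checks the issuer signature; name, birth date, etc. never sent.
assert hmac.compare_digest(disclosure[1], sign(disclosure[0]))
print("verified over-18 without learning anything else")
```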

Concluding the initial provisional agreement

Since the initial provisional agreement on some of the main elements of the legislative proposal at the end of June this year, a thorough series of technical meetings followed in order to complete a text that allowed the finalisation of the file in full. Some relevant aspects agreed by the co-legislators today are:

  • the e-signatures: the wallet will be free to use for natural persons by default, but member states may provide for measures to ensure that the free-of-charge use is limited to non-professional purposes
  • the wallet’s business model: the issuance, use and revocation will be free of charge for all natural persons
  • the validation of electronic attestation of attributes: member states shall provide free-of-charge validation mechanisms only to verify the authenticity and validity of the wallet and of the relying parties’ identity
  • the code for the wallets: the application software components will be open source, but member states are granted necessary leeway so that, for justified reasons, specific components other than those installed on user devices may not be disclosed
  • consistency between the wallet as an eID means and the underpinning scheme under which it is issued has been ensured

Finally, the revised law clarifies the scope of the qualified web authentication certificates (QWACs), which ensures that users can verify who is behind a website, while preserving the current well-established industry security rules and standards.

Next steps

Technical work will continue to complete the legal text in accordance with the provisional agreement. When finalised, the text will be submitted to the member states’ representatives (Coreper) for endorsement. Subject to a legal/linguistic review, the revised regulation will then need to be formally adopted by the Parliament and the Council before it can be published in the EU’s Official Journal and enter into force.

[…]

Source: European digital identity: Council and Parliament reach a provisional agreement on eID – Consilium

What does that free vs. ad-supported Facebook / Instagram warning mean, and why is it there?

In the EU, Meta has given you a warning saying that you need to choose between an expensive ad-free version or continuing to use the service with targeted adverts. Strangely, considering Meta makes its profits by selling your information, you don’t get the option to be paid a cut of the profits it gains by selling your information. Even more strangely, not many people are covering it. Below is a pretty good writeup of the situation, but what is not clear is whether, by agreeing to the free version, things continue as they are, or whether you are signing up for additional invasions of your privacy, such as sending your information to servers in the USA.

Even though it’s a seriously and strangely underreported phenomenon, people are leaving Meta for fear (justified or not) of further intrusions into their privacy by the slurping behemoth.

Why is Meta launching an ad-free plan for Instagram and Facebook?

After receiving major backlash from the European Union in January 2023, resulting in a €377 million fine for the tech giant, Meta has since adapted its applications to suit EU regulations. These major adaptations have all led to the recent launch of its ad-free subscription service.

This most recent announcement comes to keep in line with the European Union’s Digital Markets Act legislation. The legislation requires companies to give users the option to give consent before being tracked for advertising purposes, something Meta previously wasn’t doing.

As a way of complying with this rule while also sustaining its ad-supported business model, Meta is now releasing an ad-free subscription service for users who don’t want targeted ads showing up on their Instagram and Facebook feeds while also putting some more cash in the company’s pocket.

How much will the ad-free plan cost on Instagram and Facebook?

The price depends on where you purchase the subscription. If you purchase the ad-free plan from Meta on your desktop, the plan will cost €9.99/month. If you purchase it on your Android or iOS device, the plan will cost €12.99/month. Presumably, this is because Apple and Google charge fees, and Meta is passing those fees along to the user instead of taking a hit on its profit.

If I buy the plan on desktop, will the subscription carry over to my phone?

Yes! It’s confusing at first, but no matter where you sign up for your subscription, it will automatically link to all your Meta accounts, allowing you to view ad-free content on every device. Essentially, if you have access to a desktop and are interested in signing up for the ad-free plan, you’re better off signing up there, as you’ll save some money.

When will the ad-free plan be available to Instagram and Facebook users?

The subscription will be available for users in November 2023. Meta didn’t announce a specific date.

“In November, we will be offering people who use Facebook or Instagram and reside in these regions the choice to continue using these personalised services for free with ads, or subscribe to stop seeing ads.”

Can I still use Instagram and Facebook without subscribing to Meta’s ad-free plan?

Meta’s statement said that it believes “in an ad-supported internet, which gives people access to personalized products and services regardless of their economic status.” Staying true to its beliefs, Meta will still allow users to use its services for free with ads.

However, it’s important to note that Meta mentioned in its statement, “Beginning March 1, 2024, an additional fee of €6/month on the web and €8/month on iOS and Android will apply for each additional account listed in a user’s Account Center.” So, for now, the subscription will cover accounts on all platforms, but the cost will rise in the future for users with more than one account.

Which countries will get the new ad-free subscription option?

The below countries can access Meta’s new subscription:

Austria, Belgium, Bulgaria, Croatia, Republic of Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Norway, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Switzerland and Sweden.

Will Meta launch this ad-free plan outside the EU and Switzerland?

It’s unknown at the moment whether Meta plans to expand this service into any other regions. Currently, the only regions able to subscribe to an ad-free plan are those listed above, but if it’s successful in those countries, it’s possible that Meta could roll it out in other regions.

What’s the difference between Meta Verified and this ad-free plan?

Launched in early 2023, Meta Verified allows Facebook and Instagram users to pay for a blue tick mark next to their name. Yes, the same tick mark most celebrities with major followings typically have. This subscription service was launched as a way for users to protect their accounts and promote their businesses. Meta Verified costs $14.99/month (€14/month). It gives users the blue tick mark and provides extra account support and protection from impersonators.

While Meta Verified offers several unique account privacy features for users, it doesn’t offer an ad-free subscription. Currently, those subscribed to Meta Verified must also pay for an ad-free account if they live in one of the supported countries.

How can I sign up for Meta’s ad-free plan for Instagram and Facebook?

Users can sign up for the ad-free subscription via their Facebook or Instagram accounts. Here’s what you need to sign up:

  1. Go to account settings on Facebook or Instagram.
  2. Click subscribe on the ad-free plan under the subscriptions tab (once it’s available).

If I choose not to subscribe, will I receive more ads than I do now?

Meta says that nothing will change about your current account if you choose to keep your account as is, meaning you don’t subscribe to the ad-free plan. In other words, you’ll see exactly the same amount of ads you’ve always seen.

How will this affect other social media platforms?

Paid subscriptions seem to be the trend among many social media platforms in the past couple of years. Snapchat hopped onto the trend early, in the summer of 2022, when it released Snapchat+, which allows premium users to pay $4/month to see where they rank on their friends’ best friends list, boost their stories, pin friends as their top best friends, and further customize their settings.

More notably, Twitter, famously bought by Elon Musk and since rebranded to “X,” released three different tiers of subscriptions meant to improve a user’s experience. The tiers include Basic, Premium, and Premium Plus. X’s latest release, the Premium Plus tier, allows users to pay $16/month for an ad-free experience and the ability to edit or undo their posts.

Other major apps, such as TikTok, have yet to announce any ad-free subscription plans, although it wouldn’t be shocking if they followed suit.

For Meta’s part, it claims to want its websites to remain free and ad-supported, but we’ll see how long that lasts, especially if its first two subscription offerings succeed.

This is the spin Facebook itself gives on the story: Facebook and Instagram to Offer Subscription for No Ads in Europe

What else is noteworthy is that this comes as YouTube is installing spyware onto your computer to figure out if you are running an adblocker – also something not receiving enough attention.

See also: Privacy advocate challenges YouTube’s ad blocking detection (which isn’t spyware)

and YouTube cares less for your privacy than its revenues

Time to switch to alternatives!

9th Circuit Advances Lawsuit Over Fortnite ‘Emotes;’ Says Dance Moves Are As Protected As Songs

[…]

Many courts have already dealt with these lawsuits-come-lately filed by opportunistic people who failed to capitalize on their own pop culture cachet but thought it was worth throwing a few hundred dollars in filing fees at a federal court in hopes that the eventual payoff would be millions.

Most of these efforts have failed. Dance moves are tough to copyright, considering they’re often not a cohesive form of expression. On top of that, there’s a whole lot of independent invention because the human body is only capable of so many dance moves that portray talent, rather than just an inability to control your limbs.

Hence the federal courts’ general hesitance to proclaim controlled flailing protectable. And hence the failure of most of these Fortnite-is-worth-millions lawsuits written by people with dollar signs for eyes and Web 2.0 ambulance chasers for lawyers.

But one of these lawsuits has been revived by the Ninth Circuit, which has decided a certain number of sequential dance steps is actual intellectual property worth suing over. Here’s Wes Davis with more details for The Verge:

This week, a panel of US appeals court judges has renewed the legal battle over Fortnite dance moves by reversing the dismissal of a lawsuit filed last year by professional choreographer Kyle Hanagami against Epic Games.

[…]

The lower court said choreographic works are made up of poses that aren’t protectable alone. It found that the steps and poses of dance choreography used by characters in Fortnite were not “substantially similar, other than the four identical counts of poses” because they don’t “share any creative elements” with Hanagami’s work.

The 9th Circuit panel agreed with the lower court that “choreography is composed of various elements that are unprotectable when viewed in isolation.” However, Judge Richard Paez wrote this week that referring to portions of choreography as “poses” was like calling music “just ‘notes.’” They also found that choreography can involve other elements like timing, use of space, and even the energy of the performance.

This is a strange conclusion to reach given prior case law on the subject. But a lot of prior Fortnite case law is based on the fact that complainants never made any attempt to copyright their moves, but rather decided they were owed a living by Fortnite’s producer (Epic Games) simply because Fortnite (and Epic Games) were extremely successful.

That’s not the case here, as the Ninth Circuit [PDF] notes:

Plaintiff Kyle Hanagami (“Hanagami”) is a celebrity choreographer who owns a validly registered copyright in a five-minute choreographic work.

That’s a point in Hanagami’s favor. Whether or not this particular expression is protected under copyright law is no longer an open question. It has been registered with the US Copyright Office, thus making it possible for Hanagami to seek a payout that far exceeds actual damages that can be proven in court.

As was noted above, the lower court compared Hanagami’s registered work with the allegedly infringing “emote” and found that, at best, only small parts had been copied.

The Ninth Circuit disagrees.

The district court erred by ruling that, as a matter of law, the Steps are unprotectable because they are relatively brief. Hanagami has more than plausibly alleged that the four-count portion has substantial qualitative significance to the overall Registered Choreography. The four counts in question are repeated eight times throughout the Registered Choreography, corresponding to the chorus and titular lyrics of the accompanying song. Hanagami alleges that the segment is the most recognizable and distinctive portion of his work, similar to the chorus of a song. Whether or not a jury would ultimately find the copied portion to be qualitatively significant is a question for another day. We conclude only that the district court erred in dismissing Hanagami’s copyright claim on the basis that Epic allegedly infringed only a relatively small amount of the Registered Choreography.

This allows the lawsuit to move forward. The Ninth Circuit does not establish a bright-line rule that would encourage or deter similar lawsuits. Nor does it establish a baseline to guide future rulings. Instead, it simply says some choreography is distinctive enough that plaintiffs can sue over alleged infringement; most likely, it will be a jury deciding these facts rather than a judge handling motions to dismiss.

So… maybe that’s ok? I can understand the point that distinctive progressive dance steps are as significant as distinctive chord progressions when it comes to expression that can be copyrighted. But, on the other hand, the lack of guidance from the appellate level encourages speculative litigation because it refuses to make a call one way or the other but simply decides the lower court is (1) wrong and (2) should handle all the tough questions itself.

Where this ends up is tough to say. But, for now, it guarantees someone who rues every “emote” purchase made for my persistent offspring will only become more “get off my lawn” as this litigation progresses.

Source: 9th Circuit Advances Lawsuit Over Fortnite ‘Emotes;’ Says Dance Moves Are As Protected As Songs | Techdirt

Korean Financial Regulator Chief: About 100 Stocks Targeted in Naked Short Selling, Indicating Pervasive Illegality

In response to criticism suggesting that the ban on short selling implemented on Nov. 6 is a “political decision” aimed at next year’s general election, Lee Bok-hyun, the head of the Financial Supervisory Service (FSS), directly refuted the claims, stating, “About 100 stocks were identified as targets for naked short selling.” He said that it was a decisive measure to uproot rampant illegal short selling in the stock market.
[…]
“Currently, around 100 stocks, regardless of whether they are listed on the KOSPI or KOSDAQ, have been identified as subjects of naked, or illegal, short selling, and additional investigations are ongoing.”
[…]
He described the current situation regarding short selling as, “Not just a street with many broken windows, but rather a market where illegality has become so widespread that all the windows are shattered.”
[…]

Source: Financial Regulator Chief: About 100 Stocks Targeted in Naked Short Selling, Indicating Pervasive Illegality – Businesskorea

Naked shorting is the illegal practice of short-selling shares that have not been affirmatively determined to exist. Ordinarily, traders must borrow a stock or determine that it can be borrowed before they sell it short. So naked shorting refers to short pressure on a stock that may be larger than the tradable shares in the market.
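
To put numbers on that last sentence, here is a back-of-the-envelope sketch in Python (all figures invented for illustration):

```python
# Lawful shorts are capped by what can actually be borrowed; naked shorts
# are not, so total short interest can exceed the tradable float.
float_shares = 50_000_000      # tradable shares in the market
borrowable = 10_000_000        # shares genuinely available to borrow

covered_shorts = min(14_000_000, borrowable)   # capped at 10,000,000
naked_shorts = 65_000_000                      # sold with no borrow behind them

short_interest = covered_shorts + naked_shorts
print(f"{short_interest / float_shares:.0%} of the float is short")  # 150%
```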

Despite being made illegal after the 2008–09 financial crisis, naked shorting continues to happen because of loopholes in rules and discrepancies between paper and electronic trading systems.

Source: What Is Naked Short Selling, How Does It Work, and Is It Legal?

This and dark pool trading were all exposed by the GameStop / #GME explosion a few years ago. It’s nice to see someone finally taking it seriously, even if it is Korea and not the USA.

Data broker’s staggering sale of sensitive info exposed in unsealed FTC filing

[…]

The FTC has accused Kochava of violating the FTC Act by amassing and disclosing “a staggering amount of sensitive and identifying information about consumers,” alleging that Kochava’s database includes products seemingly capable of identifying nearly every person in the United States.

According to the FTC, Kochava’s customers, ostensibly advertisers, can access this data to trace individuals’ movements—including to sensitive locations like hospitals, temporary shelters, and places of worship, with a promised accuracy within “a few meters”—over a day, a week, a month, or a year. Kochava’s products can also provide a “360-degree perspective” on individuals, unveiling personally identifying information like their names, home addresses, phone numbers, as well as sensitive information like their race, gender, ethnicity, annual income, political affiliations, or religion, the FTC alleged.

Beyond that, the FTC alleged that Kochava also makes it easy for advertisers to target customers by categories that are “often based on specific sensitive and personal characteristics or attributes identified from its massive collection of data about individual consumers.” These “audience segments” allegedly allow advertisers to conduct invasive targeting by grouping people not just by common data points like age or gender, but by “places they have visited,” political associations, or even their current circumstances, like whether they’re expectant parents. Or advertisers can allegedly combine data points to target highly specific audience segments like “all the pregnant Muslim women in Kochava’s database,” the FTC alleged, or “parents with different ages of children.”

[…]

According to the FTC, Kochava obtains data “from a myriad of sources, including from mobile apps and other data brokers,” which together allegedly connect a web of data that “contains information about consumers’ usage of over 275,000 mobile apps.”

The FTC alleged that this usage data is also invasive, allowing Kochava customers to track not just what apps a customer uses, but how long they’ve used the apps, what they do in the apps, and how much money they spend in the apps.

[…]

Kochava “actively promotes its data as a means to evade consumers’ privacy choices,” the FTC alleged. Further, the FTC alleged that there are no real ways for consumers to opt out of Kochava’s data marketplace, because even resetting their mobile advertising IDs—the data point that’s allegedly most commonly used to identify users in its database—won’t stop Kochava customers from using its products to determine “other points to connect to and securely solve for identity.”

[…]

Kochava hoped the court would impose sanctions on the FTC because Kochava argued that many of the FTC’s allegations were “knowingly false.” But Winmill wrote that the bar for imposing sanctions is high, requiring that Kochava show that the FTC’s complaint was not just implausibly pled, but “clearly frivolous,” raised “without legal foundation,” or “brought for an improper purpose.”

In the end, Winmill denied the request for sanctions, partly because the court could not identify a “single” allegation in the FTC complaint flagged by Kochava as false that actually appeared “false or misleading,” the judge wrote.

Instead, it seemed like Kochava was attempting to mislead the court.

[…]

“The Court concludes that the FTC’s legal and factual allegations are not frivolous,” Winmill wrote, dismissing Kochava’s motion for sanctions. The judge concluded that Kochava’s claims that the FTC intended to harass and generate negative publicity about the data broker were ultimately “long on hyperbole and short on facts.”

Source: Data broker’s “staggering” sale of sensitive info exposed in unsealed FTC filing | Ars Technica

US Court rules automakers can record and save owner text messages and call logs

A federal judge on Tuesday refused to bring back a class action lawsuit alleging four auto manufacturers had violated Washington state’s privacy laws by using vehicles’ on-board infotainment systems to record and intercept customers’ private text messages and mobile phone call logs.

The Seattle-based appellate judge ruled that the practice does not meet the threshold for an illegal privacy violation under state law, handing a big win to automakers Honda, Toyota, Volkswagen and General Motors, which are defendants in five related class action suits focused on the issue. One of those cases, against Ford, had been dismissed on appeal previously.

The plaintiffs in the four live cases had appealed a prior judge’s dismissal. But the appellate judge ruled Tuesday that the interception and recording of mobile phone activity did not meet the Washington Privacy Act’s standard that a plaintiff must prove that “his or her business, his or her person, or his or her reputation” has been threatened.

In an example of the issues at stake, plaintiffs in one of the five cases filed suit against Honda in 2021, arguing that beginning in at least 2014 infotainment systems in the company’s vehicles began downloading and storing a copy of all text messages on smartphones when they were connected to the system.

An Annapolis, Maryland-based company, Berla Corporation, provides the technology to some car manufacturers but does not offer it to the general public, the lawsuit said. Once messages are downloaded, Berla’s software makes it impossible for vehicle owners to access their communications and call logs but does provide law enforcement with access, the lawsuit said.

Many car manufacturers are selling car owners’ data to advertisers as a revenue boosting tactic, according to earlier reporting by Recorded Future News. Automakers are exponentially increasing the number of sensors they place in their cars every year with little regulation of the practice.

Source: Court rules automakers can record and intercept owner text messages

WhatsApp will let you hide your IP address from whoever you call

A new feature in WhatsApp will let you hide your IP address from whoever you call using the app. Knowing someone’s IP address can reveal a lot of personal information such as their location and internet service provider, so having the option to hide it is a major privacy win. “This new feature provides an additional layer of privacy and security geared towards our most privacy-conscious users,” WhatsApp wrote in a blog post.

WhatsApp currently relays calls either through its own servers or, depending on network conditions, by establishing a direct peer-to-peer connection with whoever you are calling. Peer-to-peer calls often provide better voice quality, but they require both devices to know each other’s IP addresses.

Once you turn on the new feature, known simply as “Protect IP address in calls,” however, WhatsApp will always relay your calls through its own servers rather than establishing a peer-to-peer connection, even if it means a slight hit to sound quality. All calls will continue to remain end-to-end encrypted, even if they go through WhatsApp’s servers, the company said.
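
A conceptual sketch of the tradeoff in Python (this is not WhatsApp’s implementation; the relay hostname and addresses are made up):

```python
# Direct peer-to-peer reveals each party's IP to the other; a relayed path
# exposes only the relay.
def call_path(caller_ip: str, callee_ip: str, protect_ip: bool = False):
    """Return (hops, ip_the_callee_sees) for a call."""
    if protect_ip:
        # Everything hops through the provider's relay: the callee sees
        # only the relay, at some cost in latency and voice quality.
        return ([caller_ip, "relay.whatsapp.example", callee_ip],
                "relay.whatsapp.example")
    # Direct peer-to-peer: best quality, but the peers learn each other's IPs.
    return [caller_ip, callee_ip], caller_ip

_, seen = call_path("198.51.100.4", "203.0.113.9")
assert seen == "198.51.100.4"                 # p2p leaks the caller's IP

_, seen = call_path("198.51.100.4", "203.0.113.9", protect_ip=True)
assert seen == "relay.whatsapp.example"       # relayed call hides it
```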

WhatsApp has been adding more privacy features over the last few months. In June, the company added a feature that let people automatically silence unknown callers. It also introduced a “Privacy Checkup” section to allow users to tune up a host of privacy settings from a single place in the app, and earlier this year, added a feature that lets people lock certain chats with a fingerprint or facial recognition.

Source: WhatsApp will let you hide your IP address from whoever you call

So this means that Meta / Facebook / WhatsApp will now know whom you are calling once you turn this privacy feature on. To gain some privacy from the person on the other end of the call, you sacrifice some privacy to Meta.

In other news, it’s easy to find the IP address of someone you are WhatsApping with.

Capcom: PC Game Mods Are Essentially Just Cheats By A Different Name – uhm… what’s wrong with cheats (if it’s offline)?

It truly is amazing that the video game industry is so heavily divided on the topic of user-made game mods. I truly don’t understand it. My take has always been very simple: mods are good for gamers and even better for game makers. Why? Simple: mods extend the useful life of video games by adding new ways to play them, making them more valuable; they can fix or improve the original game, doing some of the game maker’s work for free; and they can keep a classic game relevant decades later thanks to a dedicated group of fans of a franchise that continues to be a cash cow to this day.

On the other hand are all the studios and publishers that somehow see mods as some kind of threat, even outside of the online gaming space. Take Two, Nintendo, EA: the list goes on and on and on. In most of those cases, it simply appears that the publisher prefers control over building an active modding community and gaining all the benefits that come along with it.

And then there’s Capcom, which recently made some statements essentially claiming that for all practical purposes mods are just a different form of cheating and that mods hurt the gaming experience for the public.

As spotted by GamesRadar, during an October 25 Capcom R&D presentation about its game engine, cheating, and piracy, the company claims that mods are “no different” than cheats, and that they can hurt game development.

“For the purposes of anti-cheat and anti-piracy, all mods are defined as cheats,” Capcom explained. The only exception to this are mods which are “officially” supported by the developer and, as Capcom sees it, all user-created mods are “internally” no different than cheating.

Capcom goes on to say that some mods with offensive content can be “detrimental” to a game or franchise’s reputation. The publisher also explained that mods can create new bugs and lead to more players needing support, stretching resources, and leading to increased game development costs or even delays. (I can’t help but feel my eyes starting to roll…)

I’m sorry, but just… no. No to pretty much all of this. Mods do not need to be defined as cheats, particularly in offline single player games. Mods are mods, cheats are cheats. There are a zillion different aesthetic and/or quality of life mods that exist for hundreds of games that fall into this category. Skipping intro videos for games, which I do in Civilization, cannot possibly be equated to cheating within the game, but that’s a mod.

As to the claim that mods increase development time because support teams have to handle requests from people using mods that are causing problems within the games… come on, now. Support and dev teams are very distinct and I refuse to believe this is a big enough problem to even warrant a comment.

As to offensive mods, here I have some sympathy. But I also have a hard time believing that the general public is really looking with narrow eyes at publishers of games because of what third-party mods do to their product. Mods like that exist for all kinds of games and those publishers and developers appear to be getting on just fine.

Whatever the reason behind Capcom’s discomfort with mods, it should think long and hard about its stance and decide whether it’s valid. We have seen time and time again examples of modding communities being a complete boon to publishers and I see no reason why Capcom should be any different.

Source: Capcom: PC Game Mods Are Essentially Just Cheats By A Different Name | Techdirt

So they allow people to play the game in new and unexpected ways. The same goes for cheats. Sometimes you just don’t have the patience to do that boss fight for the 100th time. Sometimes you just want to get through the game. Sometimes you just want that super-rare item with a 1-in-1,000 drop chance. If you’re not online, then mod and cheat the hell out of the game. It’s yours! You paid for it and installed the code on your hard drive. It’s out of the hands of the publisher.