Report finds most subscription services manipulate customers with ‘dark patterns’

Most subscription sites use “dark patterns” to influence customer behavior around subscriptions and personal data, according to a pair of new reports from global consumer protection groups. Dark patterns are “practices commonly found in online user interfaces [that] steer, deceive, coerce or manipulate consumers into making choices that often are not in their best interests.” The international research efforts were conducted by the International Consumer Protection and Enforcement Network (ICPEN) and the Global Privacy Enforcement Network (GPEN).

ICPEN reviewed 642 websites and mobile apps with a subscription component. The assessment revealed at least one dark pattern in use at almost 76 percent of the platforms, and multiple dark patterns at play in almost 68 percent of them. One of the most common dark patterns discovered was sneaking, where a company makes potentially negative information difficult to find. ICPEN said 81 percent of the platforms with automatic subscription renewal kept the option to turn off auto-renewal out of the purchase flow. Other dark patterns for subscription services included interface interference, where the actions a company prefers are made easier to perform than the alternatives, and forced action, where customers have to provide information to access a particular function.

The companion report from GPEN examined dark patterns that could encourage users to compromise their privacy. In this review, nearly all of the more than 1,000 websites and apps surveyed used a deceptive design practice. More than 89 percent of them used complex and confusing language in their privacy policies. Interface interference was another key offender here, with 57 percent of the platforms making the least protective privacy option the easiest to choose and 42 percent using emotionally charged language that could influence users.

Even the most savvy of us can be influenced by these subtle cues to make suboptimal decisions. Those decisions might be innocuous ones, like forgetting that you’ve set a service to auto-renew, or they might put you at risk by encouraging you to reveal more personal information than needed. The reports didn’t specify whether the dark patterns were used in illicit or illegal ways, only that they were present. The dual release is a stark reminder that digital literacy is an essential skill.

Source: Report finds most subscription services manipulate customers with ‘dark patterns’

The US Supreme Court’s Contempt for Facts Is a Betrayal of Justice

When the Supreme Court’s Ohio v. EPA decision blocked Environmental Protection Agency limits on Midwestern states polluting their downwind neighbors, a sad but telling coda came in Justice Neil Gorsuch’s opinion. In five instances, it confused nitrogen oxide, a pollutant that contributes to ozone formation, with nitrous oxide, better known as laughing gas.

You can’t make this stuff up. This repeated mistake in the 5-4 decision exemplifies a high court not just indifferent to facts but contemptuous of them.

Public trust in the Supreme Court, already at a historic low, is now understandably plunging. In the last four years, a reliably Republican majority on the high court, led by Chief Justice John Roberts, has embarked on a remarkable spree against history and reality itself, ignoring or eliding facts in decisions involving school prayer, public health, homophobia, race, climate change, abortion and clean water, not to mention the laughing gas case.

The crescendo to this assault on expertise landed in June, when the majority’s Chevron decision arrogated to the courts regulatory calls that have been made by civil servant scientists, physicians and lawyers for the last 40 years. (With stunning understatement, the Associated Press called it “a far-reaching and potentially lucrative victory to business interests.” No kidding.) The decision enthrones the high court—an unelected majority—as a group of technically incompetent, in some cases corrupt, politicos in robes with power over matters that hinge on vital facts about pollution, medicine, employment and much else. These matters govern our lives.

The 2022 Kennedy v. Bremerton School District school prayer decision hinged on a fable of a football coach offering “a quiet personal prayer,” in the words of the opinion. In reality, this coach was holding overt post-game prayer meetings on the 50-yard line, ones that an atheist player felt compelled to attend to keep off the bench. Last year’s 303 Creative v. Elenis decision, allowing a Web designer to discriminate against gay people, revolved entirely around a supposed request for a gay wedding website, a request that never existed and (supposedly) came from a straight man who never made it. Again, you can’t make this stuff up. Unless you are on the Supreme Court. Then it becomes law.

Summing up the Court’s term on July 1, the legal writer Chris Geidner called attention to a more profound “important and disturbing reality” of the current majority’s relationship to facts. “When it needs to decide a matter for the right, it can and does accept questionable, if not false, claims as facts. If the result would benefit the left, however, there are virtually never enough facts to reach a decision.”

The “laughing gas” decision illustrates this nicely: EPA had asked 23 states to submit a state-based plan to reduce their downwind pollution. Of those, 21 proposed to do nothing to limit their nitrogen (not nitrous) oxide emissions. The other two didn’t even respond. Instead of telling the states to cut their pollution as required by law, the Court’s majority invented a new theoretical responsibility for EPA—to account for future court cases keeping a state out of its Clean Air Act purview—and sent the case back to an appeals court.

Source: The Supreme Court’s Contempt for Facts Is a Betrayal of Justice | Scientific American

And that’s not even mentioning the decision giving sitting presidents immunity for criminal behaviour!

Speed limiters arrive for all new cars in the European Union

It was a big week for road safety campaigners in the European Union as Intelligent Speed Assistance (ISA) technology became mandatory on all new cars.

The rules came into effect on July 7 and follow a 2019 decision by the European Commission to make ISA obligatory on all new models and types of vehicles introduced from July 2022. Two years on, and the tech must be in all new cars.

European legislators reckon that the rules will make for safer roads. However, they will also add to the ever-increasing amount of technology rolling around the continent’s highways. While EU law has no legal force in the UK, it’s hard to imagine many manufacturers making an exemption for Britain.

So how does it work? In the first instance, the speed limit on a given road is determined using data from a Global Navigation Satellite System (GNSS) – such as the Global Positioning System (GPS) – and a digital map. This might be combined with physical sign recognition.

If the driver is being a little too keen, the ISA system must notify them that the limit has been exceeded but is, according to the European Road Safety Charter, “not to restrict his/her possibility to act in any moment during driving.”

“The driver is always in control and can easily override the ISA system.”

There are four options available to manufacturers according to the regulations. The first two, a cascaded acoustic or vibrating warning, don’t intervene, while the latter two, haptic feedback through the accelerator pedal and a speed limiter, will. The European Commission noted, “Even in the case of speed control function, where the car speed will be automatically gently reduced, the system can be smoothly overridden by the driver by pressing the accelerator pedal a little bit deeper.”
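To make that concrete, here is a minimal sketch of the kind of decision flow described above. It is not any manufacturer’s actual implementation: the function names, the override threshold, and the assumption that a freshly recognised road sign trumps map data are all illustrative guesses.

```python
# Hypothetical ISA decision loop -- an illustrative sketch only, not any
# manufacturer's implementation. Names and thresholds are assumptions.

def resolve_speed_limit(map_limit_kmh: int, sign_limit_kmh: int | None = None) -> int:
    """Combine GNSS/digital-map data with optional camera sign recognition."""
    # Assume a freshly recognised road sign overrides possibly stale map data.
    return sign_limit_kmh if sign_limit_kmh is not None else map_limit_kmh

def isa_action(speed_kmh: float, limit_kmh: int, accelerator_pct: float,
               mode: str = "acoustic") -> str:
    """Return the system's response for one control-loop tick."""
    if speed_kmh <= limit_kmh:
        return "no action"
    if mode in ("acoustic", "vibration"):
        return "warn driver"                  # the two non-intervening options
    if accelerator_pct > 90:                  # pressing "a little bit deeper"
        return "driver override"              # the driver stays in control
    if mode == "haptic":
        return "push back through the accelerator pedal"
    return "gently reduce speed"              # the speed control function

if __name__ == "__main__":
    limit = resolve_speed_limit(map_limit_kmh=50, sign_limit_kmh=30)
    print(isa_action(speed_kmh=42, limit_kmh=limit, accelerator_pct=35,
                     mode="limiter"))         # -> "gently reduce speed"
```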

The RAC road safety spokesperson Rod Dennis said: “While it’s not currently mandated that cars sold in the UK have to be fitted with Intelligent Speed Assistance (ISA) systems, we’d be surprised if manufacturers deliberately excluded the feature from those they sell in the UK as it would add unnecessary cost to production.”

This writer has driven a car equipped with the technology, and while it would be unfair to name and shame particular manufacturers, things are a little hit-and-miss. Road signs are not always interpreted correctly, and maps are not always up to date, meaning the car is occasionally convinced that the speed limit differs from reality, with various beeps and vibrations to demonstrate its belief.

Dennis cautioned, “Anyone getting a new vehicle would be well advised to familiarise themselves with ISA and how it works,” and we would have to agree.

While it is important to understand that the technology is still a driver aid and can easily be overridden, it is not hard to detect the direction of travel.

Source: Speed limiters arrive for all new cars in the European Union • The Register

Paramount Axes Decades Of Comedy Central History In Latest Round Of Brunchlord Dysfunction

Last month we noted how the brunchlords in charge of Paramount (CBS) decided to eliminate decades of MTV News journalism history as part of their ongoing “cost saving” efforts. It was just the latest casualty in an ever-consolidating and very broken U.S. media business routinely run by some of the least competent people imaginable.

We’ve noted how with streaming growth slowing, there’s no longer money to be made goosing stock valuations via subscriber growth. So media giants (and the incompetent brunchlords that usually fail upward into positions of unearned power within them) have turned their attention to all the usual tricks: layoffs, pointless megamergers, price hikes, and more and more weird and costly consumer restrictions.

Part of that equation also involves being too cheap to preserve history, as we’ve seen countless times when a journalism or media company implodes and then immediately disappears not just staffers but decades of their hard work. Usually (and this is from my experience as a freelancer) without any warning or consideration of the impact whatsoever.

Paramount has been struggling after its ingenious strategy of making worse and worse streaming content while charging more and more money somehow hasn’t panned out. While the company looks around for merger and acquisition partners, they’ve effectively taken a hatchet to company staff and history.

First with the recent destruction of the MTV News archives and a major round of layoffs, and now with the elimination of years of Comedy Central history. Last week, as part of additional cost cutting moves, the company basically gutted the Comedy Central website, eliminating years of archived video history of numerous programs ranging from old South Park clips to episodes of The Colbert Report.

A website message and press statement by the company informs users that they can simply head over to the Paramount+ streaming app to watch older content:

“As part of broader website changes across Paramount, we have introduced more streamlined versions of our sites, driving fans to Paramount+ to watch their favorite shows.”

Except older episodes of The Daily Show and The Colbert Report can no longer be found on Paramount+, also due to layoffs and cost cutting efforts at the company. Paramount is roughly $14 billion in debt due to mismanagement, and a recent plan to merge with Skydance was scuttled at the last second.

Eventually Paramount will find somebody else to merge with in order to bump stock valuations, nab a fat tax cut, and justify excessive executive compensation (look at me, I’m a savvy dealmaker!). At which point, as we saw with the disastrous AT&T → Time Warner → Discovery series of mergers, an entirely new wave of layoffs, quality erosion, and chaos will begin as they struggle to pay off deal debt.

It’s all so profoundly pointless, and at no point does anything like product quality, customer satisfaction, employee welfare, or the preservation of history enter into it. The executives spearheading this repeated trajectory from ill-conceived business models to mindless mergers will simply be promoted to bigger and better ventures because there’s simply no financial incentive to learn from historical missteps.

The executives at the top of the heap usually make out like bandits utterly regardless of competency or outcomes, so why change anything?

Source: Paramount Axes Decades Of Comedy Central History In Latest Round Of Brunchlord Dysfunction | Techdirt

Why You Should Consider Proton Docs Over Google

Proton has officially launched Docs in Proton Drive, a new web-based productivity app that gives you access to a fully-featured text editor with shared editing capabilities and full end-to-end encryption. It’s meant to take on Google Docs, one of the leading online word processors in the world, and to make Proton’s storage service more convenient to use. But how exactly does Proton’s document editor compare to Google’s? Here’s what you need to know.

Docs in Proton Drive has a familiar face

On the surface, Docs in Proton Drive—or Proton Docs, as some folks have begun calling it for simplicity’s sake—looks just like Google Docs. And that’s to be expected. Text editors don’t have much reason to stray from the same basic “white page with a bunch of toolbars” look, and they all offer the same types of tools, like headings, bullet points, font changes, highlighting, etc.

[…]

The difference isn’t in the app itself

[…]

Proton has built its entire business around the motto of “privacy first,” and that extends to the company’s latest software offerings, too. Docs in Proton Drive includes complete end-to-end encryption—down to your cursor movements—which means nobody, not even Proton, can track what you’re doing in your documents. They’re locked down before they even reach Proton’s servers.

This makes the product very enticing for businesses that might want to keep their work as private as possible while also still having the same functionality as Google Docs—because Proton isn’t missing any of the functionality that Google Docs offers, aside from the way that Google Docs integrates with the rest of the Google Suite of products.

That’s not to say that Google isn’t secure. Google does utilize its own level of encryption when storing your data in the cloud. However, it isn’t completely end-to-end encrypted, so Google has open access to your data. Google says it only trains its generative AI on “publicly accessible” information, and while that probably won’t affect most people, it is a pain point for many, especially as the company does make exceptions for features like Smart Compose.

That worry is why products with end-to-end encryption have become such a hot commodity in recent years—especially as cybersecurity risks continue to rise, meaning you have to trust the companies who store your data even more. Proton’s advantage is that it promises to NEVER use your content for any purpose—and those aren’t empty words. Because the company doesn’t have access to your content, it couldn’t use it even if it wanted to.
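For readers wondering what “locked down before they even reach Proton’s servers” means in practice, here is a minimal sketch of the client-side encryption idea using Python’s third-party cryptography package. It is a conceptual illustration only, not Proton’s actual protocol (which has its own key management), and the function names are made up for the example.

```python
# Conceptual sketch of end-to-end encryption: the document is encrypted on the
# client before upload, so the server only ever stores ciphertext.
# Requires the third-party package:  pip install cryptography
from cryptography.fernet import Fernet

def client_side_encrypt(document_text: str, key: bytes) -> bytes:
    """Encrypt locally; only the resulting ciphertext is uploaded."""
    return Fernet(key).encrypt(document_text.encode("utf-8"))

def client_side_decrypt(ciphertext: bytes, key: bytes) -> str:
    """Decrypt locally on another device that holds the same key."""
    return Fernet(key).decrypt(ciphertext).decode("utf-8")

if __name__ == "__main__":
    key = Fernet.generate_key()              # stays on the user's devices
    blob = client_side_encrypt("draft: quarterly numbers", key)
    # The provider only ever sees `blob`; without the key it cannot read it.
    print(client_side_decrypt(blob, key))
```

Because the provider never holds the key, it cannot read or train on the plaintext, which is the substance of the “couldn’t use it even if it wanted to” claim.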

[…]

Source: Why You Should Consider Proton Docs Over Google | Lifehacker

Proton Docs is a privacy-focused answer to Google Docs and Microsoft Word

Proton Docs looks a lot like Google Docs: white pages, formatting toolbar at the top, live indicators showing who’s in the doc with their name attached to a cursor, the whole deal. That’s not especially surprising, for a couple of reasons. First, Google Docs is hugely popular, and there are only so many ways to style a document editor anyway. Second, Proton Docs exists in large part to be all the things that are great about Google Docs — just without Google in the mix.

Docs is launching today inside of Proton Drive, as the latest app in Proton’s privacy-focused suite of work tools. The company, which started as an email service, now also offers a calendar, a file storage system, a password manager, and more. Adding Docs to the ecosystem makes sense for Proton as it tries to compete with Microsoft Office and Google Workspace, and it seemed clearly on the way after Proton acquired Standard Notes in April. Standard Notes isn’t going away, though, Proton PR manager Will Moore tells me — it’s just that Docs is borrowing some features.

The first version of Proton Docs seems to have most of what you’d expect in a document editor: rich text options, real-time collaborative editing, and multimedia support. (If Proton can handle image embeds better than Google, it might have a hit on its hands just for that.) It’s web-only and desktop-optimized for now, though Moore tells me it’ll eventually come to other platforms. “Everything that Google’s got is on our roadmap,” he says.

A screenshot of multiple editors in Proton Docs. “Imagine Google Docs… there, that’s it. You know what Proton Docs looks like.” (Image: Proton)

Since this is a Proton product, security is everything: the company says every document, keystroke, and even cursor movement is end-to-end encrypted in real time. Proton has long promised to never sell or otherwise use your data.

[…]

Source: Proton Docs is a privacy-focused answer to Google Docs and Microsoft Word – The Verge

Spain introduces porn passport – really wants to know what you are watching and especially how often erm… no… *cough* to stop kids from watching smut

The Spanish government has a plan to prevent kids from watching porn online: Meet the porn passport.

Officially (and drily) called the Digital Wallet Beta (Cartera Digital Beta), the app Madrid unveiled on Monday would allow internet platforms to check whether a prospective smut-watcher is over 18. Porn-viewers will be asked to use the app to verify their age. Once verified, they’ll receive 30 generated “porn credits” with a one-month validity granting them access to adult content. Enthusiasts will be able to request extra credits.

While the tool has been criticized for its complexity, the government says the credit-based model is more privacy-friendly, ensuring that users’ online activities are not easily traceable.
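As a rough illustration of how a credit-based scheme can avoid a central log of who watched what, here is a toy sketch. It is emphatically not the actual Cartera Digital Beta design: the token format, the single-use assumption, and all names are mine, and a real system would use signed, unlinkable credentials rather than plain random tokens.

```python
# Toy sketch of an anonymous, expiring "credit" scheme -- NOT the real
# Cartera Digital Beta. All names and details are illustrative assumptions.
import secrets
from datetime import datetime, timedelta

CREDITS_PER_ISSUE = 30                  # figure taken from the article
VALIDITY = timedelta(days=30)           # "one-month validity"

def issue_credits(age_verified: bool) -> list[dict]:
    """Wallet app: after age verification, hand out a batch of credits."""
    if not age_verified:
        return []
    expiry = datetime.utcnow() + VALIDITY
    return [{"token": secrets.token_hex(16), "expires": expiry}
            for _ in range(CREDITS_PER_ISSUE)]

def redeem_credit(credit: dict, spent_tokens: set[str]) -> bool:
    """Adult site: accept a credit if it is unexpired and unspent.

    The site learns that *someone* verified their age, not who they are.
    """
    if credit["expires"] < datetime.utcnow():
        return False
    if credit["token"] in spent_tokens:
        return False
    spent_tokens.add(credit["token"])
    return True
```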

The system will be available by the end of the summer. It will be voluntary, as online platforms can rely on other age-verification methods to screen out inappropriate viewers. It heralds an EU law going into force in October 2027, which will require websites to stop minors from accessing porn.

Eventually, Madrid’s porn passport is likely to be replaced by the EU’s very own digital identity system (eIDAS2) — a so-called wallet app allowing people to access a smorgasbord of public and private services across the whole bloc.

“We are acting in advance and we are asking platforms to do so too, as what is at stake requires it,” José Luis Escrivá, Spain’s digital secretary, told Spanish newspaper El País.

Source: Spain introduces porn passport to stop kids from watching smut – POLITICO

Every time they mention kids, have a really good look at how much more they are spying on you and controlling your actions.

Apple bows to Kremlin pressure to remove leading VPNs from Russian AppStore – in other news, Apple still active in Russia

Apple has removed several apps offering virtual private network (VPN) services from the Russian AppStore, following a request from Roskomnadzor, Russia’s media regulator, independent news outlet Mediazona reported on Thursday.

The VPN services removed by Apple include leading services such as ProtonVPN, Red Shield VPN, NordVPN and Le VPN. Those living in Russia will no longer be able to download the services, while users who already have them on their phones can continue using them, but will be unable to update them.

Red Shield VPN posted a notice from Apple on X, which said that their app would be removed following a request from Roskomnadzor, “because it includes content that is illegal in Russia”.

Since the start of the Russian invasion of Ukraine in February 2022, the Kremlin has introduced strict online censorship and has blocked numerous independent media outlets and popular social media apps such as Facebook, Instagram and X.

As a result, anyone wanting to access blocked sites from Russia is forced to use a VPN, a protective tunnel that encrypts internet traffic and changes a user’s IP address.

[…]

Source: Apple bows to Kremlin pressure to remove leading VPNs from Russian AppStore — Novaya Gazeta Europe

Supreme Court overrules Chevron, kneecapping federal regulators

On Friday, the Supreme Court overturned a long-standing legal doctrine in the US, making a transformative ruling that could hamper federal agencies’ ability to regulate all kinds of industry. Six Republican-appointed justices voted to overturn the doctrine, called Chevron deference, a decision that could affect everything from pollution limits to consumer protections in the US.

Chevron deference allows courts to defer to federal agencies when there are disputes over how to interpret ambiguous language in legislation passed by Congress. That’s supposed to lead to more informed decisions by leaning on expertise within those agencies. By overturning the Chevron doctrine, the conservative-dominated SCOTUS decided that judges ought to make the call instead of agency experts.

“Perhaps most fundamentally, Chevron’s presumption is misguided because agencies have no special competence in resolving statutory ambiguities. Courts do,” Chief Justice John Roberts writes in his opinion.

The decision effectively strips federal agencies of a tool they’ve been able to use to take action on pressing issues while Congress tries to catch up with new laws. Chevron deference has come up, for instance, in efforts to use the 1970 Clean Air Act to prevent the greenhouse gas emissions that cause climate change. Overturning it is a big win for lobbyists and anyone else who might want to make it harder to crack down on industry through federal regulation.

“It would really unleash a kind of chaotic period of time where federal courts are deciding what they think all these laws mean. And that can lead to a lot of inconsistency and confusion for agencies and for regulated parties,” Jody Freeman, director of the Environmental and Energy Law Program at Harvard, previously told The Verge when SCOTUS heard oral arguments over Chevron deference in January.

[…]

In her dissent, Justice Elena Kagan wrote that Chevron deference “has formed the backdrop against which Congress, courts, and agencies — as well as regulated parties and the public — all have operated for decades. It has been applied in thousands of judicial decisions. It has become part of the warp and woof of modern government, supporting regulatory efforts of all kinds — to name a few, keeping air and water clean, food and drugs safe, and financial markets honest.”

[…]

The fate of net neutrality in the US, for instance, has been tied to Chevron deference. Courts have previously deferred to the FCC on how to define broadband. Is it considered a telecommunications or information service? If it’s telecommunications, then it’s subject to “common carrier” regulations and restrictions placed on public utilities to ensure fair access. The FCC has flip-flopped on the issue between the Obama, Trump, and Biden administrations — with the FCC deciding in April to restore net neutrality rules.

The Supreme Court’s decision risks bogging down courts with all these nitty-gritty questions. They used to be able to punt much of that over to federal agencies, a move that’s out of the playbook now.

[…]

Source: Supreme Court overrules Chevron, kneecapping federal regulators – The Verge

The US supreme court is really going nuts, having just decided that bribery is OK: Corrupt US supreme court thinks corruption is not corrupt and just basically legalized bribery

The Pentagon’s Antivaccine Propaganda Endangered Public Health and Tarnished U.S. Credibility

According to a June Reuters exposé, the Pentagon ran a secret antivaccine campaign in several developing countries at the height of the pandemic in 2020. Why? “To sow doubt about the efficacy of vaccines and other life-saving aid that was being supplied by China,” Reuters reported. Trump’s secretary of defense signed off on it; the Biden administration discontinued the program shortly after taking office. The Pentagon launched its propaganda operation in the Philippines (as COVID was raging), where it set up fake anti-vax accounts on social media. A military officer involved with the Pentagon’s psyop told Reuters: “We weren’t looking at this from a public health perspective. We were looking at how we could drag China through the mud.”

Such cavalier thinking has lethal consequences in the infodemic era. Timothy Caulfield, a University of Alberta public policy expert, put this bluntly in an interview with Scientific American: “The United States government made a conscious decision to spread misinformation that killed people.”

Is he being hyperbolic? Well, health experts are quite certain that antivaccine rhetoric proved deadly during the coronavirus pandemic and that, in the U.S., politicized misinformation led to COVID deaths in the hundreds of thousands. What fueled much of this antivaccine discourse? Conspiracy narratives about microchips and vaccine-risk cover-ups as well as other villainous plots to control humanity by governments or global institutions. Yes, it was bonkers. But now we know that when health authorities were desperately trying to tamp down these fears, the Pentagon was running its own conspiracy operation to discredit vaccines – just so it could score points against China. The revelation is a “worst case scenario story” for the global public health community, says Caulfield, “because it demonstrates that anti-vax misinformation was being spread by the government, and it reinforces people’s distrust in institutions.”

The fallout from the military’s covert psyop will reverberate on multiple levels. “When democratic governments employ this kind of information operation, they undermine the values and trust that sustain democracies,” says Kate Starbird, a disinformation expert at the University of Washington. Similarly the economist Alex Tabarrok writes that the Pentagon’s antivaccine campaign has “undermined U.S. credibility on the global stage and eroded trust in American institutions.” (No doubt, but the latter has been on a precipitous decline for a while.)

The question now is: What can be done to prevent something like this happening again? International development economist Charles Kenny says it’s time to “ban intelligence operations from interfering in public health.” That would be a welcome start, but let’s not hold our breath. We’ve been down this road before: In 2011, the CIA used a fake hepatitis vaccination program to search for Osama bin Laden in Pakistan. After the ploy came to light several years later, terrorists murdered legitimate polio vaccine workers, and there was a resurgence of polio in the population. In 2014 the White House vowed the CIA would no longer use vaccine programs as a cover for spy operations. Here we are a decade later, however, and it appears the Pentagon wasn’t bound by that promise and won’t be keeping it in the future.

The U.S. government’s past ignoble deceptions of its own citizens should have served as plenty of warning that this is foolish. We owe today’s UFO craze to the cover-up of a military balloon crash in 1947, only acknowledged decades later by the U.S. Air Force. More seriously, during the cold war, the CIA secretly funded a slew of American cultural and political organizations to (unwittingly) help wage its propaganda campaign against the Soviet Union, promoting favored artists in commissar-like fashion. Then-U.S. secretary of state Colin Powell touted completely fallacious “weapons of mass destruction” buncombe to the United Nations to justify the botched invasion of Iraq in 2003. Now overlay this with the vaccine deceptions used by America’s spymasters in Pakistan and more recently in the Philippines. It makes for a confusing lens through which to view a world overrun with fake news, bots and troll armies.

John Lisle, a University of Texas historian who researches cold war science and the intelligence community, says that the Pentagon should have learned from history before undertaking its recent antivaccine disinformation campaign. “It may have been intended to make Filipinos distrust China, but its legacy will be to make Americans distrust the government.”

Source: The Pentagon’s Antivaccine Propaganda Endangered Public Health and Tarnished U.S. Credibility | Scientific American

Before George Bush the younger, it would have seemed beyond belief that stupidity of this kind was possible. But since then the US has descended to unimaginable lows in its presidential choices and policies, and with the corruption that has accompanied them, it almost seems like something you just kind of shrug at.

Apple set to pay away Batterygate and audio defect lawsuits for pocket change

Apple is preparing to settle two lawsuits next month over alleged iPhone flaws, provided the respective judges agree to the terms of the deals.

The first planned settlement, for In re Apple Inc. Stockholder Derivative Litigation, 4:19-cv-05153-YGR, aims to resolve investor pique over the impact of “Batterygate” on Apple stock.

Filed in 2019, the case [PDF] seeks compensation for unexplained iPhone shutdowns that started occurring in 2016 as a result of battery aging that left devices unable to handle processing demands.

“Instead of alerting customers about this solution, beginning in January 2017, Apple published iOS updates that secretly ‘fixed’ the shutdown issues by dramatically slowing the performance of older iPhone models without the owner’s knowledge or consent,” the initial complaint alleged.

“These updates silently introduced a trade-off between battery life and performance reduction without informing iPhone owners that a simple $79 replacement battery would restore both.”

This was something of a scandal at the time and led to a fine of $11.4 million from Italian regulators in 2018, a $113 million penalty extracted by 34 US states, consumer litigation that led to a settlement of $310-$500 million, and a fine of about $27 million in France. There’s also a UK claim for up to £853 million ($1.03 billion) that has yet to be resolved.

Apple investors now stand to recoup a paltry $6 million if Judge Yvonne Gonzalez Rogers approves the deal [PDF] in a hearing scheduled for July 16, 2024. That would be almost 0.002 percent of the $383.29 billion in revenue Apple collected in 2023.

The settlement, disclosed to investors in May, requires Apple to notify customers in a clear and conspicuous way when it makes changes to iOS Performance Management. And alongside increased commitments to transparency – traditionally not Apple’s strong suit – it imposes verification obligations on its chief compliance officer.

The second claim awaiting settlement approval is Tabak, et al. v. Apple Inc., 4:19-CV-02455-JST, a lawsuit over an alleged audio chip defect in Apple’s iPhone 7 and 7 Plus models that resulted in intermittent sound issues.

According to the complaint, the alleged defect was caused by solder that failed to adhere to the logic board when stressed, thereby breaking the electrical connection between the audio chip and board.

Apple has denied the allegations, but to be rid of the litigation is willing to pay $35 million to resolve the claim, provided Judge Jon Tigar approves the arrangement in a hearing scheduled for July 18.

If the deal goes through, affected members of the class could receive payments ranging from $50 to $349 for their trouble. Of the 1,649,497 Settlement Class Members, 114,684 payment forms have been submitted to the claim administrator. Those notified of membership in the class have until July 3 to respond.

Source: Apple set to pay away Batterygate and audio defect lawsuits • The Register

EU’s ‘Going Dark’ Expert Group Publishes 42-Point Surveillance Plan For Access To All Devices And Data At All Times

Techdirt has been covering the disgraceful attempts by the EU to break end-to-end encryption — supposedly in order to “protect the children” — for two years now. An important vote that could have seen EU nations back the proposal was due to take place recently. The vote was cancelled — not because politicians finally came to their senses, but the opposite. Those backing the new law were worried the latest draft might not be approved, and so removed it from the agenda, to allow a little more backroom persuasion to be applied to holdouts.

Although this “chat control” law has been the main focus of the EU’s push for more surveillance of innocent citizens, it is by no means the end of it. As the German digital rights site Netzpolitik reports, work is already underway on further measures, this time to address the non-existent “going dark” threat to law enforcement:

The group of high-level experts had been meeting since last year to tackle the so-called “going dark” problem. The High-Level Group set up by the EU was characterized by a bias right from the start: the committee is primarily made up of representatives of security authorities and therefore represents their perspective on the issue.

Given the background and bias of the expert group, it’s no surprise that its report, “Recommendations from the High-Level Group on Access to Data for Effective Law Enforcement”, is a wish-list of just about every surveillance method. The Pirate Party Member of the European Parliament Patrick Breyer has a good summary of what the “going dark” group wants:

according to the 42-point surveillance plan, manufacturers are to be legally obliged to make digital devices such as smartphones, smart homes, IoT devices, and cars monitorable at all times (“access by design”). Messenger services that were previously securely encrypted are to be forced to allow for interception. Data retention, which was overturned by the EU Court of Justice, is to be reenacted and extended to OTT internet communications services such as messenger services. “At the very least”, IP connection data retention is to be required to be able to track all internet activities. The secure encryption of metadata and subscriber data is to be prohibited. Where requested by the police, GPS location tracking should be activated by service providers (“tracking switch”). Uncooperative providers are to be threatened with prison sentences.

It’s an astonishing list, not least for the re-appearance of data retention, which was thrown out by the EU’s highest court in 2014. It’s a useful reminder that even when bad laws are overturned, constant vigilance is required to ensure that they don’t come back at a later date.

Source: EU’s ‘Going Dark’ Expert Group Publishes 42-Point Surveillance Plan For Access To All Devices And Data At All Times | Techdirt

These people don’t seem to realise that opening this stuff up for law enforcement (who do misuse their powers) also opens it up to criminals.

Julian Assange to finally go free in guilty plea deal with US

WikiLeaks founder Julian Assange has been freed from prison in the UK after agreeing to plead guilty to just one count of conspiracy to obtain and disclose national defense information, brought against him by the United States. Uncle Sam previously filed more than a dozen counts.

Assange has spent the past five years in a British super-max battling against extradition to the US to face trial for publicly leaking various classified government files via his website.

He is now set to return to his native Australia as a free man once he’s appeared in a US federal court this week to enter a guilty plea.

Assange’s whistleblower organization on Monday confirmed the activist had “left Belmarsh maximum security prison” earlier that day after being “granted bail by the High Court in London.” We’re told he was released at Stansted airport, where he boarded a plane to leave the UK.

His destination appears to be the Northern Mariana Islands, a US territory in the Pacific. A letter [PDF] from the US Department of Justice’s National Security Division dated June 24 states the WikiLeaker will appear before a federal district judge on the islands on Wednesday to admit the allegation against him.

After that, he is expected to be allowed to leave for Australia. Whatever sentence the federal district court decides is expected to have elapsed due to time already served, allowing him to go free.

[…]

At the time of writing, the US, UK, and Australian authorities all appear to be silent on how and why the plea deal was struck. However it appears to have been in the works for some time: A video posted at around 0100 on Monday, UK time, and dated June 19 features Stella Assange – Julian’s wife – saying she expects his release within a week. The video also featured Kristinn Hrafnsson, WikiLeaks editor-in-chief, saying he expects Assange’s imminent release.

Reduced charges

The US had sought to extradite Assange to face 18 charges, but the latest filing [PDF] against him lists just one charge: Conspiracy to obtain and disclose national defense information.

That charge was listed in a superseding indictment issued by the US Attorney’s Office in 2022, along with charges including conspiracy to commit computer intrusions, obtaining national defense information, and disclosure of national defense information.

The absence of the last charge is surely notable – Assange demonstrably did disclose such information, but he and WikiLeaks have long argued that doing so was an act of journalism done in the public interest and therefore justifiable.

The fresh court filing details the sole remaining charge, which it spells out as Assange having “knowingly and unlawfully conspired” with WikiLeaks source Chelsea Manning to commit three offenses against the United States, namely:

  • To receive and obtain documents, writings, and notes connected with the national defense, including such materials classified up to the SECRET level, for the purpose of obtaining information respecting the national defense, and knowing and with reason to believe at the time such materials were received and obtained, they had been and would be taken, obtained, and disposed of by a person contrary to the provisions of Chapter 37 of Title 18 of the United States Code, in violation of Title 18, United States Code, Section 793(c);
  • To willfully communicate documents relating to the national defense, including documents classified up to the SECRET level, from persons having lawful possession of or access to such documents, to persons not entitled to receive them, in violation of Title 18, United States Code, Section 793(d); and
  • To willfully communicate documents relating to the national defense from persons in unauthorized possession of such documents to persons not entitled to receive them, in violation of Title 18, United States Code, Section 793(e).

Private Manning was collared and jailed for 35 years in 2013 for illegally passing classified military intelligence to Assange to leak – most notably the Cablegate files – a sentence commuted by President Obama in 2017.

[…]

Source: Julian Assange to go free in guilty plea deal with US • The Register

Windows 11 is now automatically enabling OneDrive folder backup without asking permission

Microsoft has made OneDrive slightly more annoying for Windows 11 users. Quietly and without any announcement, the company changed Windows 11’s initial setup so that it turns on automatic folder backup without asking permission.

Now, those setting up a new Windows computer the way Microsoft wants them to (in other words, connected to the internet and signed into a Microsoft account) will get to their desktops with OneDrive already syncing stuff from folders like Desktop, Pictures, Documents, Music, and Videos. Depending on how much is stored there, you might end up with a desktop and other folders filled to the brim with shortcuts to various stuff right after finishing a clean Windows installation.

Automatic folder backup in OneDrive is a very useful feature when used properly and when the user deliberately enables it. However, Microsoft decided that sending a few notification prompts to enable folder backup was not enough, so it just turned the feature on without asking anybody or even letting users know about it, resulting in a flood of Reddit posts from users wondering what the hell the green checkmarks next to the files and shortcuts on their desktops are.

If you do not want your computer to back up everything on your desktop or other folders, here is how to turn the feature off (you can also set up Windows 11 in offline mode):

  1. Right-click the OneDrive icon in the tray area, click the settings icon and then press Settings.
  2. Go to the “Sync and Backup” tab and click “Manage backup.”
  3. Turn off all the folders you do not want to back up in OneDrive and confirm the changes.
  4. If you have an older OneDrive version with the classic tabbed interface, go to the Backup tab and click Manage Backup > Stop backup > Stop backup.

Microsoft is no stranger to shady tricks with its software and operating system. Several months ago, we noticed that OneDrive would not let you close it without you explaining the reason first (Microsoft later reverted that stupid change). A similar thing was also spotted in the Edge browser, with Microsoft asking users why they downloaded Chrome.

As a reminder, you can always just uninstall OneDrive and call it a day.

Source: Windows 11 is now automatically enabling OneDrive folder backup without asking permission – Neowin

Record labels sue AI music generators for ‘massive infringement of recorded music’

Major music labels are taking on AI startups that they believe trained on their songs without paying. Universal Music Group, Warner Music Group and Sony Music Group sued the music generators Suno and Udio for allegedly infringing on copyrighted works on a “massive scale.”

The Recording Industry Association of America (RIAA) initiated the lawsuits and wants to establish that there is “nothing that exempts AI technology from copyright law or that excuses AI companies from playing by the rules.”

The music labels’ lawsuits in US federal court accuse Suno and Udio of scraping their copyrighted tracks from the internet. The filings against the AI companies reportedly demand injunctions against future use and damages of up to $150,000 per infringed work. (That sounds like it could add up to a monumental sum if the court finds them liable.) The suits appear aimed at establishing licensed training as the only acceptable industry framework for AI moving forward — while instilling fear in companies that train their models without consent.

Screenshot of the Udio AI music generator homescreen. (Image: Udio)

Suno AI and Udio AI (Uncharted Labs runs the latter) are startups with software that generates music based on text inputs. The former is a partner of Microsoft for its Copilot music generation tool. The RIAA claims the services’ reproduced tracks are uncannily similar to existing works, to the degree that they must have been trained on copyrighted songs. It also claims the companies didn’t deny that they trained on copyrighted works, instead shielding themselves behind their training being “confidential business information” and standard industry practices.

According to The Wall Street Journal, the lawsuits accuse the AI generators of creating songs that sounded remarkably similar to The Temptations’ “My Girl,” Green Day’s “American Idiot,” and Mariah Carey’s “All I Want for Christmas Is You,” among others. They also claim the AI services produced indistinguishable vocals from artists like Lin-Manuel Miranda, Bruce Springsteen, Michael Jackson and ABBA.

Wired reports that one example cited in the lawsuit details how one of the AI tools reproduced a song that sounded nearly identical to Chuck Berry’s pioneering classic “Johnny B. Goode,” using the prompt, “1950s rock and roll, rhythm & blues, 12 bar blues, rockabilly, energetic male vocalist, singer guitarist,” along with some of Berry’s lyrics. The suit claims the generator almost perfectly generated the original track’s “Go, Johnny, go, go” chorus.

Screenshot of the Suno AI webpage. (Image: Suno)

To be clear, the RIAA isn’t advocating based on the principle that all AI training on copyrighted works is wrong. Instead, it’s saying it’s illegal to do so without licensing and consent, i.e., when the labels (and, likely to a lesser degree, the artists) don’t make any money off of it.

[…]

Source: Record labels sue AI music generators for ‘massive infringement of recorded music’

So they are not only claiming that stuff inspired by stuff a computer listened to is different from stuff inspired by stuff a person listened to, but they are also claiming copyright on something from the 1950s?!

Microsoft Account to local account conversion guide erased from official Windows 11 guide

Microsoft has been pushing hard for its users to sign into Windows with a Microsoft Account. The newest Windows 11 installer removed the easy bypass to the requirement that you make an account or log in with your existing account. If you installed Windows 11 with a Microsoft Account and now want to stop sending the company your data, you can still switch to a local account after the fact. Microsoft even has instructions on how to do this on its official support website – or at least it used to…

Microsoft’s ‘Change from a local account to a Microsoft Account’ guide shows users how they can change their Windows 11 PC login credentials to use their Microsoft Account. The company also supplied instructions on how to ‘Change from a Microsoft account to a local account’ on the same page. However, when we checked the page using the Wayback Machine, the instructions on how to do the latter were still present on June 12, 2024, but had disappeared by June 17, 2024. The ‘Change from a Microsoft account to a local account’ instructions still haven’t returned.

Converting your Windows 11 PC’s login from a Microsoft Account to a local account is a pretty simple process. All you have to do is go to the Settings app, proceed to Accounts > Your info, and select “Sign in with a local account instead.” Follow the instructions on the screen, and you should be good to go.

[…]

It’s apparent that Microsoft really wants users to sign up and use their services, much like how Google and Apple make you create an account so you can make full use of your Android or iDevice. While Windows 11 still lets you use the OS with a local account, these developments show that Microsoft wants this option to be inaccessible, at least for the average consumer.

Source: Microsoft Account to local account conversion guide erased from official Windows 11 guide — instructions redacted earlier this week | Tom’s Hardware

EFF: New License Plate Reader Vulnerabilities Prove The Tech Itself is a Public Safety Threat

Automated license plate readers “pose risks to public safety,” argues the EFF, “that may outweigh the crimes they are attempting to address in the first place.” When law enforcement uses automated license plate readers (ALPRs) to document the comings and goings of every driver on the road, regardless of a nexus to a crime, it results in gargantuan databases of sensitive information, and few agencies are equipped, staffed, or trained to harden their systems against quickly evolving cybersecurity threats. The Cybersecurity and Infrastructure Security Agency (CISA), a component of the U.S. Department of Homeland Security, released an advisory last week that should be a wake up call to the thousands of local government agencies around the country that use ALPRs to surveil the travel patterns of their residents by scanning their license plates and “fingerprinting” their vehicles. The bulletin outlines seven vulnerabilities in Motorola Solutions’ Vigilant ALPRs, including missing encryption and insufficiently protected credentials…

Unlike location data a person shares with, say, GPS-based navigation app Waze, ALPRs collect and store this information without consent and there is very little a person can do to have this information purged from these systems… Because drivers don’t have control over ALPR data, the onus for protecting the data lies with the police and sheriffs who operate the surveillance and the vendors that provide the technology. It’s a general tenet of cybersecurity that you should not collect and retain more personal data than you are capable of protecting. Perhaps ironically, a Motorola Solutions cybersecurity specialist wrote an article in Police Chief magazine this month saying that public safety agencies “are often challenged when it comes to recruiting and retaining experienced cybersecurity personnel,” even though “the potential for harm from external factors is substantial.” That partially explains why more than 125 law enforcement agencies reported a data breach or cyberattacks between 2012 and 2020, according to research by former EFF intern Madison Vialpando. The Motorola Solutions article claims that ransomware attacks “targeting U.S. public safety organizations increased by 142 percent” in 2023.

Yet, the temptation to “collect it all” continues to overshadow the responsibility to “protect it all.” What makes the latest CISA disclosure even more outrageous is that it is at least the third time in the last decade that major security vulnerabilities have been found in ALPRs… If there’s one positive thing we can say about the latest Vigilant vulnerability disclosures, it’s that for once a government agency identified and reported the vulnerabilities before they could do damage… The Michigan Cyber Command center found a total of seven vulnerabilities in Vigilant devices, two of which were medium severity and five of which were high severity…

But a data breach isn’t the only way that ALPR data can be leaked or abused. In 2022, an officer in the Kechi (Kansas) Police Department accessed ALPR data shared with his department by the Wichita Police Department to stalk his wife.

The article concludes that public safety agencies should “collect only the data they need for actual criminal investigations.”

“They must never store more data than they adequately protect within their limited resources – or they must keep the public safe from data breaches by not collecting the data at all.”

Source: EFF: New License Plate Reader Vulnerabilities Prove The Tech Itself is a Public Safety Threat

Forbes accuses Perplexity AI of bypassing robots.txt web standard to scrape content, Tollbit startup gains publicity by baselessly accusing everyone of doing this too in open letter. Why do we listen to this shit?

[…]

A letter to publishers seen by Reuters on Friday, which does not name the AI companies or the publishers affected, comes amid a public dispute between AI search startup Perplexity and media outlet Forbes involving the same web standard and a broader debate between tech and media firms over the value of content in the age of generative AI.

The business media publisher publicly accused Perplexity of plagiarizing its investigative stories in AI-generated summaries without citing Forbes or asking for its permission.

A Wired investigation published this week found Perplexity likely bypassing efforts to block its web crawler via the Robots Exclusion Protocol, or “robots.txt,” a widely accepted standard meant to determine which parts of a site are allowed to be crawled.
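For context, honouring that standard is trivial for a well-behaved crawler: it fetches the site’s robots.txt and checks it before requesting a page. Here is a minimal sketch using Python’s standard library; the bot name and URL are placeholders, not any real crawler’s.

```python
# How a compliant crawler checks the Robots Exclusion Protocol, using only the
# Python standard library. The user agent and URL below are placeholders.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def allowed_to_crawl(page_url: str, user_agent: str = "ExampleBot") -> bool:
    """Return True only if the site's robots.txt permits fetching page_url."""
    parts = urlparse(page_url)
    robots = RobotFileParser()
    robots.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()                     # download and parse the site's rules
    return robots.can_fetch(user_agent, page_url)

if __name__ == "__main__":
    # The accusation above is that some bots simply never perform this check.
    print(allowed_to_crawl("https://example.com/some-article"))
```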

Perplexity declined a Reuters request for comment on the dispute.

The News Media Alliance, a trade group representing more than 2,200 U.S.-based publishers, expressed concern about the impact that ignoring “do not crawl” signals could have on its members.

“Without the ability to opt out of massive scraping, we cannot monetize our valuable content and pay journalists. This could seriously harm our industry,” said Danielle Coffey, president of the group.

Source: Exclusive-Multiple AI companies bypassing web standard to scrape publisher sites, licensing firm says

So the original clickbait headline comes from a content licensing startup scaring up content providers, but with no details whatsoever. Why is this even news?!

500,000 Books Have Been Deleted From The Internet Archive’s Lending Library by Greedy Publishers

If you found out that 500,000 books had been removed from your local public library, at the demands of big publishers who refused to let them buy and lend new copies, and were further suing the library for damages, wouldn’t you think that would be a major news story? Wouldn’t you think many people would be up in arms about it?

It’s happening right now with the Internet Archive, and it’s getting almost no attention.

As we’ve discussed at great length, the Internet Archive’s Open Library system is indistinguishable from the economics of how a regular library works. The Archive either purchases physical books or has them donated (just like a physical library). It then lends them out on a one-to-one basis (leaving aside a brief moment where it took down that barrier when basically all libraries were shut down due to pandemic lockdowns), such that when someone “borrows” a digital copy of a book, no one else can borrow that same copy.
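A toy sketch of that one-copy-one-loan bookkeeping may make the point clearer. It illustrates the controlled digital lending idea only; it is not the Internet Archive’s actual code, and the class and method names are invented for the example.

```python
# Toy model of controlled digital lending ("one copy, one loan") -- an
# illustration of the concept, not the Internet Archive's implementation.
class LendingTitle:
    def __init__(self, title: str, copies_owned: int):
        self.title = title
        self.copies_owned = copies_owned    # physical/acquired copies held
        self.checked_out = set()            # patrons currently borrowing

    def borrow(self, patron_id: str) -> bool:
        """Lend a digital copy only if an owned copy is not already out."""
        if len(self.checked_out) >= self.copies_owned:
            return False                    # every owned copy is on loan
        self.checked_out.add(patron_id)
        return True

    def give_back(self, patron_id: str) -> None:
        self.checked_out.discard(patron_id)

if __name__ == "__main__":
    book = LendingTitle("Example Novel", copies_owned=1)
    print(book.borrow("patron-a"))   # True  -- the single copy goes out
    print(book.borrow("patron-b"))   # False -- no second copy to lend
    book.give_back("patron-a")
    print(book.borrow("patron-b"))   # True  -- the copy is available again
```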

And yet, for all of the benefits of such a system in enabling more people to be able to access information, without changing the basic economics of how libraries have always worked, the big publishers all sued the Internet Archive. The publishers won the first round of that lawsuit. And while the court (somewhat surprisingly!) did not order the immediate closure of the Open Library, it did require the Internet Archive to remove any books upon request from publishers (though only if the publishers made those books available as eBooks elsewhere).

As the case has moved into the appeals stage (where we have filed an amicus brief), the Archive has revealed that around 500,000 books have been removed from the open library.

The Archive has put together an open letter to publishers, requesting that they restore access to this knowledge and information — a request that will almost certainly fall on extremely deaf ears.

We purchase and acquire books—yes, physical, paper books—and make them available for one person at a time to check out and read online. This work is important for readers and authors alike, as many younger and low-income readers can only read if books are free to borrow, and many authors’ books will only be discovered or preserved through the work of librarians. We use industry-standard technology to prevent our books from being downloaded and redistributed—the same technology used by corporate publishers.

But the publishers suing our library say we shouldn’t be allowed to lend the books we own. They have forced us to remove more than half a million books from our library, and that’s why we are appealing. 

The Archive also has a huge collection of quotes from people who have been impacted negatively by all of this. Losing access to knowledge is a terrible, terrible thing, driven by publishers who have always hated the fundamental concept of libraries and are very much using this case as an attack on the fundamental principle of lending books.

[…]

And, why? Because copyright and DRM systems allow publishers to massively overcharge for eBooks. This is what’s really the underlying factor here. Libraries in the past could pay the regular price for a book and then lend it out. But with eBook licensing, they are able to charge exorbitant monopoly rents, while artificially limiting how many books libraries can even buy.

I don’t think many people realize the extreme nature of the pricing situation here. As we’ve noted, a book that might cost $29.99 retail can cost $1,300 for an eBook license, and that license may include restrictions, such as having to relicense after a certain number of lends, or saying a library may only be allowed to purchase a single eBook license at a time.

It isn’t the Internet Archive that changed the way libraries work. It’s the publishers. They’re abusing copyright and DRM to fundamentally kill the very concept of a library, and this lawsuit is a part of that strategy.

Source: 500,000 Books Have Been Deleted From The Internet Archive’s Lending Library | Techdirt

EU delays decision over continuous spying on all your devices *cough* scanning encrypted messages for kiddie porn

European Union officials have delayed talks over proposed legislation that could lead to messaging services having to scan photos and links to detect possible child sexual abuse material (CSAM). Were the proposal to become law, it may require the likes of WhatsApp, Messenger and Signal to scan all images that users upload — which would essentially force them to break encryption.

For the measure to pass, it would need to have the backing of at least 15 of the member states representing at least 65 percent of the bloc’s entire population. However, countries including Germany, Austria, Poland, the Netherlands and the Czech Republic were expected to abstain from the vote or oppose the plan due to cybersecurity and privacy concerns, Politico reports. If EU members come to an agreement on a joint position, they’ll have to hash out a final version of the law with the European Commission and European Parliament.

The legislation was first proposed in 2022 and it could result in messaging services having to scan all images and links with the aim of detecting CSAM and communications between minors and potential offenders. Under the proposal, users would be informed about the link and image scans in services’ terms and conditions. If they refused, they would be blocked from sharing links and images on those platforms. However, as Politico notes, the draft proposal includes an exemption for “accounts used by the State for national security purposes.”

[…]

Patrick Breyer, a digital rights activist who was a member of the previous European Parliament before this month’s elections, has argued that proponents of the so-called “chat control” plan aimed to take advantage of a power vacuum before the next parliament is constituted. Breyer says that the delay of the vote, prompted in part by campaigners, “should be celebrated,” but warned that “surveillance extremists among the EU governments” could again attempt to advance chat control in the coming days.

Other critics and privacy advocates have slammed the proposal. Signal president Meredith Whittaker said in a statement that “mass scanning of private communications fundamentally undermines encryption,” while Edward Snowden described it as a “terrifying mass surveillance measure.”

[…]

The EU is not the only entity to attempt such a move. In 2021, Apple revealed a plan to scan iCloud Photos for known CSAM. However, it scrapped that controversial effort following criticism from the likes of customers, advocacy groups and researchers.

Source: EU delays decision over scanning encrypted messages for CSAM

Watch out very, very carefully whenever people start taking your freedoms in the name of “protecting children”.

FedEx’s Secretive Police Force Is Helping Cops Build An AI Car Surveillance Network

[…] Forbes has learned the shipping and business services company is using AI tools made by Flock Safety, a $4 billion car surveillance startup, to monitor its distribution and cargo facilities across the United States. As part of the deal, FedEx is providing its Flock surveillance feeds to law enforcement, an arrangement that Flock has with at least four multi-billion dollar private companies. But publicly available documents reveal that some local police departments are also sharing their Flock feeds with FedEx — a rare instance of a private company availing itself of a police surveillance apparatus.

To civil rights activists, such close collaboration has the potential to dramatically expand Flock’s car surveillance network, which already spans 4,000 cities across over 40 states and some 40,000 cameras that track vehicles by license plate, make, model, color and other identifying characteristics, like dents or bumper stickers. Lisa Femia, staff attorney at the Electronic Frontier Foundation, said because private entities aren’t subject to the same transparency laws as police, this sort of arrangement could “[leave] the public in the dark, while at the same time expanding a sort of mass surveillance network.”

[…]

It’s unclear just how widely law enforcement is sharing Flock data with FedEx. According to publicly available lists of data sharing partners, two police departments have granted the FedEx Air Carrier Police Department access to their Flock cameras: Shelby County Sheriff’s Office in Tennessee and Pittsboro Police Department in Indiana.

Shelby County Sheriff’s Office public information officer John Morris confirmed the collaboration. “We share reads from our Flock license plate readers with FedEx in the same manner we share the data with other law enforcement agencies, locally, regionally, and nationally,” he told Forbes via email.

[…]

FedEx is also sharing its Flock camera feeds with other police departments, including the Greenwood Police Department in Indiana, according to Matthew Fillenwarth, assistant chief at the agency. Morris at Shelby County Sheriff’s Office confirmed his department had access to FedEx’s Flock feeds too. Memphis Police Department said it received surveillance camera feeds from FedEx through its Connect Memphis system.

[…]

Flock, which was founded in 2017, has raised more than $482 million in venture capital investment from the likes of Andreessen Horowitz, helping it expand its vast network of cameras across America through both public police department contracts and through more secretive agreements with private businesses.

Forbes has now uncovered at least four corporate giants using Flock, none of which had publicly disclosed contracts with the surveillance startup. As Forbes previously reported, $50 billion-valued Simon Property, the country’s biggest mall owner, and home improvement giant Lowe’s are two of the biggest clients. Like FedEx, Simon Property has also provided its mall feeds to local cops.

[…]

Kaiser Permanente, the largest health insurance company in America, has shared Flock data with the Northern California Regional Intelligence Center, an intelligence hub that provides support to local and federal police investigating major crimes across California’s west coast.

[…]

Flock’s senior vice president of policy and communications Joshua Thomas declined to comment on private customers. “Flock’s technology and tools help our customers bolster their public safety efforts by helping to deter and solve crime efficiently and objectively,” Thomas said. “Objective video evidence is crucial to solving crime and we support our customers sharing that evidence with those that they are legally allowed to do so with.”

He said Flock was helping to solve “thousands of crimes nationwide” and is working toward its “goal of leveraging technology to eliminate crime.” Forbes previously found that Flock’s marketing data had exaggerated its impact on crime rates and that the company had itself likely broken the law across various states by installing cameras without the right permits.

Source: FedEx’s Secretive Police Force Is Helping Cops Build An AI Car Surveillance Network

Signal, MEPs urge EU Council to drop law that puts a spy on everyone’s devices

On Thursday, the EU Council is scheduled to vote on a legislative proposal that would attempt to protect children online by disallowing confidential communication.

The vote had been set for Wednesday but got pushed back [PDF].

Known to detractors as Chat Control, the proposal seeks to prevent the online dissemination of child sexual abuse material (CSAM) by requiring internet service providers to scan digital communication – private chats, emails, social media messages, and photos – for unlawful content.

The proposal [PDF], recognizing the difficulty of explicitly outlawing encryption, calls for “client-side scanning” or “upload moderation” – analyzing content on people’s mobile devices and computers for certain wrongdoing before it gets encrypted and transmitted.

The idea is that algorithms running locally on people’s devices will reliably recognize CSAM (and whatever else is deemed sufficiently awful), block it, and/or report it to authorities. This act of automatically policing and reporting people’s stuff before it’s even had a chance to be securely transferred rather undermines the point of encryption in the first place.
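To make that mechanism concrete, here is a minimal, purely illustrative Python sketch of the control flow client-side scanning implies. It is an assumption-laden sketch rather than anything from the proposal: real schemes would use perceptual hashes or machine-learning classifiers distributed by an authority rather than exact SHA-256 matches, and every name in it is made up. The structural point is what matters: the check runs on the user’s device, against the plaintext, before any encryption happens.

```python
import hashlib

# Hypothetical blocklist of content hashes (illustrative only). Real proposals
# envisage perceptual hashes or ML classifiers supplied by an authority,
# not exact SHA-256 matches of files.
BLOCKED_HASHES: set[str] = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}


def passes_local_scan(plaintext: bytes) -> bool:
    """Inspect the *unencrypted* content on the user's own device."""
    return hashlib.sha256(plaintext).hexdigest() not in BLOCKED_HASHES


def send_message(plaintext: bytes, encrypt, transmit, report_to_authority):
    """The order of operations is the whole controversy: scanning happens
    before encryption, so end-to-end encryption only ever protects content
    the scanner has already approved."""
    if not passes_local_scan(plaintext):
        report_to_authority(plaintext)   # flagged content is reported, not sent
        return
    transmit(encrypt(plaintext))         # everything else is encrypted as usual
```

Whoever controls the blocklist (or the classifier that replaces it) decides what may be shared privately, and the device rather than the user enforces that decision before encryption ever applies, which is why critics argue the approach undermines encryption regardless of what it is called.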

We’ve been here before. Apple announced plans to implement a client-side scanning scheme back in August 2021, only to face withering criticism from the security community and civil society groups. In late 2021, the iGiant essentially abandoned the idea.

Europe’s planned “regulation laying down rules to prevent and combat child sexual abuse” is not the only legislative proposal that contemplates client-side scanning as a way to front-run the application of encryption. The US EARN IT Act imagines something similar.

In the UK, the Online Safety Act of 2023 includes a content scanning requirement, though with the government’s acknowledgement that enforcement isn’t presently feasible. While it does allow telecoms regulator Ofcom to require online platforms to adopt an “accredited technology” to identify unlawful content, there is currently no such technology and it’s unclear how accreditation would work.

With the EU proposal vote approaching, opponents of the plan have renewed their calls to shelve the pre-crime surveillance regime.

In an open letter [PDF] on Monday, Meredith Whittaker, CEO of Signal, which threatened to withdraw its app from the UK if the Online Safety Act disallowed encryption, reiterated why the EU client-side scanning plan is unworkable and dangerous.

“There is no way to implement such proposals in the context of end-to-end encrypted communications without fundamentally undermining encryption and creating a dangerous vulnerability in core infrastructure that would have global implications well beyond Europe,” wrote Whittaker.

“Instead of accepting this fundamental mathematical reality, some European countries continue to play rhetorical games.

“They’ve come back to the table with the same idea under a new label. Instead of using the previous term ‘client-side scanning,’ they’ve rebranded and are now calling it ‘upload moderation.’

“Some are claiming that ‘upload moderation’ does not undermine encryption because it happens before your message or video is encrypted. This is untrue.”

The Internet Architecture Board, part of the Internet Engineering Task Force, offered a similar assessment of client-side scanning in December.

Encrypted comms service Threema published its open variation on this theme on Monday, arguing that mass surveillance is incompatible with democracy, is ineffective, and undermines data security.

“Should it pass, the consequences would be devastating: Under the pretext of child protection, EU citizens would no longer be able to communicate in a safe and private manner on the internet,” the biz wrote.

“The European market’s location advantage would suffer a massive hit due to a substantial decrease in data security. And EU professionals like lawyers, journalists, and physicians could no longer uphold their duty to confidentiality online. All while children wouldn’t be better protected in the least bit.”

Threema said if it isn’t allowed to offer encryption, it will leave the EU.

And on Tuesday, 37 Members of the European Parliament signed an open letter to the EU Council urging legislators to reject Chat Control.

“We explicitly warn that the obligation to systematically scan encrypted communication, whether called ‘upload-moderation’ or ‘client-side scanning,’ would not only break secure end-to-end encryption, but will to a high probability also not withstand the case law of the European Court of Justice,” the MEPs said. “Rather, such an attack would be in complete contrast to the European commitment to secure communication and digital privacy, as well as human rights in the digital space.”

Source: Signal, MEPs urge EU Council to drop encryption-eroding law • The Register

Hey, EU, stop spying on us! We are supposed to be the free ones here.

If Creepy Spyware Clearview AI scanned your face, you may get equity in the company

Controversial facial recognition company Clearview AI has agreed to an unusual settlement of a class action lawsuit, The New York Times reports. Rather than paying cash, the company would provide a 23 percent stake in itself to any Americans in its database. Without the settlement, Clearview could go bankrupt, according to court documents.

If you live in the US and have ever posted a photo of yourself publicly online, you may be part of the class action. The settlement could amount to at least $50 million, according to court documents. It still must be approved by a federal judge.

Clearview AI, which counts billionaire Peter Thiel as a backer, says it has over 30 billion images in its database. Those can be accessed and cross-referenced by thousands of law enforcement departments including the US FBI and Department of Homeland Security.

Shortly after its existence came to light, Clearview was hit with lawsuits in Illinois, California, Virginia, New York and elsewhere, which were all brought together as a class action suit in a federal Chicago court. The cost of the litigation was said to be draining the company’s reserves, forcing it to seek a creative way to settle the suit.

The relatively small sum divided by the large number of users likely to be in the database means you won’t be receiving a windfall. In any case, a payout would only happen if the company goes public or is acquired, according to the report. Once that occurs, lawyers would take up to 39 percent of the settlement, meaning the final amount could be reduced to about $30 million. If a third of Americans were in the database (about 110 million people), each would get about 27 cents.
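For anyone who wants to check the arithmetic, here is the back-of-the-envelope calculation behind that figure as a small Python sketch. It uses only the estimates cited above (the minimum $50 million settlement, legal fees of up to 39 percent, and roughly 110 million class members); none of the inputs are exact, and depending on rounding the result comes out to 27 or 28 cents per person.

```python
# Back-of-the-envelope arithmetic; all inputs are the estimates cited above.
settlement = 50_000_000          # minimum settlement value per court documents
lawyer_cut = 0.39                # lawyers could take up to 39 percent
class_size = 110_000_000         # roughly a third of Americans

after_fees = settlement * (1 - lawyer_cut)   # 30,500,000, i.e. "about $30 million"
per_person = after_fees / class_size         # 0.277..., i.e. roughly 27-28 cents
print(f"after fees: ${after_fees:,.0f}; per person: ~{per_person * 100:.0f} cents")
```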

That does raise the question of whether it would be worth just over a quarter to see one of the creepiest companies of all time go bankrupt. To cite a small litany of the actions taken against it (on top of the US class action):

  • It was sued by the ACLU in 2020 (as part of the settlement, Clearview agreed to permanently halt sales of its biometric database to private companies in the US)
  • Italy slapped a €20 million fine on the company in 2022 and banned it from using images of Italians in its database
  • Privacy groups in Europe filed complaints against it for allegedly breaking privacy laws (2021)
  • UK’s privacy watchdog slapped it with a £7.55 million fine and ordered it to delete data from any UK resident
  • The LAPD banned the use of its software in 2020
  • Earlier this year the EU barred untargeted scraping of faces from the web, effectively blocking Clearview’s business model in Europe

Source: If Clearview AI scanned your face, you may get equity in the company

Sonos draws more customer anger — this time for its privacy policy. Now they will sell your customer data, apparently

It’s been a rocky couple of months for Sonos — so much so that CEO Patrick Spence now has a canned autoreply for customers emailing him to vent about the redesigned app. But as the company works to right the ship, restore trust, and get the new Sonos Ace headphones off to a strong start, it finds itself in the middle of yet another controversy.

As highlighted by repair technician and consumer privacy advocate Louis Rossmann, Sonos has made a significant change to its privacy policy, at least in the United States, with the removal of one key line. The updated policy no longer contains a sentence that previously said, “Sonos does not and will not sell personal information about our customers.” That pledge is still present in other countries, but it’s nowhere to be found in the updated US policy, which went into effect earlier this month.

Now, some customers, already feeling burned by the new Sonos app’s unsteady performance, are sounding off about what they view as another poor decision from the company’s leadership. For them, it’s been one unforced error after another from a brand they once recommended without hesitation.

[…]

As part of its reworked app platform, Sonos rolled out web-based access for all customer systems — giving the cloud an even bigger role in the company’s architecture. Unfortunately, the web app currently lacks any kind of two-factor authentication, which has also irked users; all it takes is an email address and password to remotely control Sonos devices.

[…]

Source: Sonos draws more customer anger — this time for its privacy policy – The Verge

If I had an “idiocy” tag, I would have used it for these bozos.

Mozilla caves to public pressure and restores Firefox add-ons banned in Russia that circumvent Russian censorship

Mozilla has reinstated certain add-ons for Firefox that earlier this week had been banned in Russia by the Kremlin.

The browser extensions, which are hosted on the Mozilla store, were made unavailable in the Land of Putin on or around June 8 after a request by the Russian government and its internet censorship agency, Roskomnadzor.

Among those extensions were three pieces of code that were explicitly designed to circumvent state censorship – including a VPN and Censor Tracker, a multi-purpose add-on that allowed users to see what websites shared user data, and a tool to access Tor websites.

The day the ban went into effect, Roskomsvoboda – the developer of Censor Tracker – took to the official Mozilla forums and asked why its extension was suddenly banned in Russia with no warning.

[…]

“In alignment with our commitment to an open and accessible internet, Mozilla will reinstate previously restricted listings in Russia,” the group declared. “Our initial decision to temporarily restrict these listings was made while we considered the regulatory environment in Russia and the potential risk to our community and staff.

“We remain committed to supporting our users in Russia and worldwide and will continue to advocate for an open and accessible internet for all.”

[…]

Source: Mozilla restores Firefox add-ons banned in Russia • The Register

(see also: Mozilla Firefox Blocks Add-Ons which Circumvent Censorship in Russia)