Texas AG Latest To Sue GM For Covertly Selling Driver Data To Insurance Companies

Last year Mozilla released a report showcasing how the auto industry has some of the worst privacy practices of any tech industry in America (no small feat). Massive amounts of driver behavior data are collected by your car, and even more are hoovered up from your smartphone every time you connect. This data often isn’t secured or even encrypted, and it is sold to a long list of dodgy, unregulated middlemen.

Last March the New York Times revealed that automakers like GM routinely sell access to driver behavior data to insurance companies, which then use that data to justify jacking up your rates. The practice isn’t clearly disclosed to consumers, and has resulted in 11 federal lawsuits in less than a month.

Now Texas AG Ken Paxton has belatedly joined the fun, filing suit (press release, complaint) in the state district court of Montgomery County against GM for “false, deceptive, and misleading business practices”:

“Companies are using invasive technology to violate the rights of our citizens in unthinkable ways. Millions of American drivers wanted to buy a car, not a comprehensive surveillance system that unlawfully records information about every drive they take and sells their data to any company willing to pay for it.”

Paxton notes that GM’s tracking impacted 1.8 million Texans and 14 million vehicles, and that few if any of those drivers understood they were signing up to be spied on by their vehicle. This is, amazingly enough, the first state lawsuit against an automaker for privacy violations, according to Politico.

The sales pitch for this kind of tracking and data sales is that good drivers will be rewarded for more careful driving. But as publicly traded companies, everybody in this chain — from insurance companies to automakers — is utterly financially disincentivized from giving anybody a consistent break for good behavior. That’s just not how it’s going to work. Everybody pays more and more. Always.

But GM and other automakers’ primary problem is that they weren’t telling consumers this kind of tracking was even happening in any clear, direct way. Usually it’s buried deep in an unread end user agreement for roadside assistance apps and related services. Those services usually involve a free trial, but the agreement to data collection sticks around long after the trial ends.

[…]

Source: Texas AG Latest To Sue GM For Covertly Selling Driver Data To Insurance Companies | Techdirt

UK Once Again Denies A Passport Over Applicant’s Name Due To Intellectual Property Concerns

I can’t believe this, but it happened again. Almost exactly a decade ago, Tim Cushing wrote about a bonkers story out of the UK in which a passport applicant whose middle name was “Skywalker” was denied the passport due to purported trademark or copyright concerns. The objection that ought to immediately leap to mind: nothing about a name or its appearance on a passport amounts to either creative expression being copied or use in commerce, meaning that neither copyright nor trademark law ought to apply in the slightest.

And you would have thought that, coming out of that whole episode, proper guidance would have been given to the UK’s passport office so that this kind of stupidity doesn’t happen again. Unfortunately, it did happen again. A UK woman attempted to get a passport for her daughter, whom she named Khaleesi, only to have it refused over the trademark covering the Game of Thrones character who held the same fictional title.

Lucy, 39, from Swindon in Wiltshire, said the Passport Office initially refused the application for Khaleesi, six.

Officials said they were unable to issue a passport unless Warner Brothers gave permission because it owned the name’s trademark. But the authority has since apologised for the error.

“I was absolutely devastated, we were so looking forward to our first holiday together,” Lucy said.

While any intellectual property concerns over a passport are absolutely silly, I would argue that trademark law makes even less sense here than copyright would. Again, trademark law is designed specifically to protect the public from being confused as to the source of a good or service in commerce. There is no good, no service, and no commerce here. Lucy would simply like to take her own child across national borders. That’s it. Lucy had to consult with an attorney due to this insanity, which didn’t initially yield the proper result.

After seeking legal advice, her solicitors discovered that while there is a trademark for Game of Thrones, it is for goods and services – but not for a person’s name.

“That information was sent to the Passport Office who said I would need a letter from Warner Brothers to confirm my daughter is able to use that name,” she said.

This amounts to a restriction on the rights and freedoms of a child in a free country as a result of the choices her parents made about her name. Whatever your thoughts on IP laws in general, that simply cannot be the aim of literally any of them.

Now, once the media got a hold of all of this, the Passport Office eventually relented, said it made an error in denying the passport, and has put the application through. But even the government’s explanation doesn’t fully make sense.

Officials explained there had been a misunderstanding and that the guidance staff had originally given applies only to people changing their names.

“He advised me that they should be able to process my daughter’s passport now,” she said.

Why would the changing of a name be any different? My name is my name, not a creative expression, nor a use in commerce. If I elect to change my name from “Timothy Geigner” to “Timothy Mickey Mouse Geigner”, none of that equates to an infringement of Disney’s rights, whether copyright or trademark. It’s just my name. It would only be if I attempted to use my new name in commerce or as part of an expression that I might run afoul of either trademark or copyright law.

What this really is is the pervasive cancer that is ownership culture. It’s only with ownership culture that you get a passport official somehow thinking that Warner Bros.’ production of a fantasy show means a six-year-old can’t get a passport.

Source: UK Once Again Denies A Passport Over Applicant’s Name Due To Intellectual Property Concerns | Techdirt

New U.N. Cybercrime Treaty Could Threaten Human Rights

The United Nations approved its first international cybercrime treaty yesterday. The effort succeeded despite opposition from tech companies and human rights groups, who warn that the agreement will permit countries to expand invasive electronic surveillance in the name of criminal investigations. Experts from these organizations say that the treaty undermines the global human rights of freedom of speech and expression because it contains clauses that countries could interpret to internationally prosecute any perceived crime that takes place on a computer system.

[…]

among the watchdog groups that monitored the meeting closely, the tone was funereal. “The U.N. cybercrime convention is a blank check for surveillance abuses,” says Katitza Rodriguez, the Electronic Frontier Foundation’s (EFF’s) policy director for global privacy. “It can and will be wielded as a tool for systemic rights violations.”

In the coming weeks, the treaty will head to a vote among the General Assembly’s 193 member states. If it’s accepted by a majority there, the treaty will move to the ratification process, in which individual country governments must sign on.

The treaty, called the Comprehensive International Convention on Countering the Use of Information and Communications Technologies for Criminal Purposes, was first devised in 2019, with debates to determine its substance beginning in 2021. It is intended to provide a global legal framework to prevent and respond to cybercrimes.

[…]

experts have expressed that the newly adopted treaty lacks such safeguards for a free Internet. A major concern is that the treaty could be applied to all crimes as long as they involve information and communication technology (ICT) systems. HRW has documented the prosecution of LGBTQ+ people and others who expressed themselves online. This treaty could require countries’ governments to cooperate with other nations that have outlawed LGBTQ+ conduct or digital forms of political protest, for instance.

“This expansive definition effectively means that when governments pass domestic laws that criminalize a broad range of conducts, if it’s committed through an ICT system, they can point to this treaty to justify the enforcement of repressive laws,” said HRW executive director Tirana Hassan in a news briefing late last month.

[…]

“The treaty allows for cross-border surveillance and cooperation to gather evidence for serious crimes, effectively transforming it into a global surveillance network,” Rodriguez says. “This poses a significant risk of cross-border human rights abuses and transnational repression.”

[…]

Source: New U.N. Cybercrime Treaty Could Threaten Human Rights | Scientific American

For a more complete look at the threats presented by this treaty, also see: UN Cybercrime Treaty does not define cybercrime, allows any definition and forces all signatories to secretly surveil their own population on request by any other signatory (think totalitarian states spying on people in democracies with no recourse)

Suno & Udio To RIAA: Your Music Is Copyrighted, You Can’t Copyright Styles

AI music generators Suno and Udio responded to the lawsuits filed by the major recording labels, arguing that their platforms are tools for making new, original music that “didn’t and often couldn’t previously exist.”

“Those genres and styles — the recognizable sounds of opera, or jazz, or rap music — are not something that anyone owns,” the companies said. “Our intellectual property laws have always been carefully calibrated to avoid allowing anyone to monopolize a form of artistic expression, whether a sonnet or a pop song. IP rights can attach to a particular recorded rendition of a song in one of those genres or styles. But not to the genre or style itself.” TorrentFreak reports: “[The labels] frame their concern as one about ‘copies’ of their recordings made in the process of developing the technology — that is, copies never heard or seen by anyone, made solely to analyze the sonic and stylistic patterns of the universe of pre-existing musical expression. But what the major record labels really don’t want is competition.” The labels’ position is that any competition must be legal, and the AI companies state quite clearly that the law permits the use of copyrighted works in these circumstances. Suno and Udio also make it clear that snippets of copyrighted music aren’t stored as a library of pre-existing content in the neural networks of their AI models, to be output as “a collage of ‘samples’ stitched together from existing recordings” when prompted by users.

“[The neural networks were] constructed by showing the program tens of millions of instances of different kinds of recordings,” Suno explains. “From analyzing their constitutive elements, the model derived a staggeringly complex collection of statistical insights about the auditory characteristics of those recordings — what types of sounds tend to appear in which kinds of music; what the shape of a pop song tends to look like; how the drum beat typically varies from country to rock to hip-hop; what the guitar tone tends to sound like in those different genres; and so on.” These models are vast stores, the defendants say, not of copyrighted music but of information about what musical styles consist of, and it’s from that information that new music is made.

Most copyright lawsuits in the music industry are about reproduction and public distribution of identified copyright works, but that’s certainly not the case here. “The Complaint explicitly disavows any contention that any output ever generated by Udio has infringed their rights. While it includes a variety of examples of outputs that allegedly resemble certain pre-existing songs, the Complaint goes out of its way to say that it is not alleging that those outputs constitute actionable copyright infringement.” With Udio declaring that, as a matter of law, “that key point makes all the difference,” Suno’s conclusion is served raw. “That concession will ultimately prove fatal to Plaintiffs’ claims. It is fair use under copyright law to make a copy of a protected work as part of a back-end technological process, invisible to the public, in the service of creating an ultimately non-infringing new product.” Noting that Congress enacted the first copyright law in 1791, Suno says that in the 233 years since, not a single case has ever reached a contrary conclusion.

In addition to addressing allegations unique to their individual cases, the AI companies accuse the labels of various types of anti-competitive behavior. These range from imposing conditions to prevent streaming services from obtaining licensed music from smaller labels at lower rates, and seeking to impose a “no AI” policy on licensees, to claims that they “may have responded to outreach from potential commercial counterparties by engaging in one or more concerted refusals to deal.” The defendants say this type of behavior is fueled by the labels’ dominant control of copyrighted works and, by extension, the overall market. Here, however, ownership of copyrighted music is trumped by the existence and knowledge of musical styles, over which nobody can claim ownership or seek control. “No one owns musical styles. Developing a tool to empower many more people to create music, by scrupulously analyzing what the building blocks of different styles consist of, is a quintessential fair use under longstanding and unbroken copyright doctrine. Plaintiffs’ contrary vision is fundamentally inconsistent with the law and its underlying values.”
You can read Suno and Udio’s answers to the RIAA’s lawsuits here (PDF) and here (PDF).

Source: Suno & Udio To RIAA: Your Music Is Copyrighted, You Can’t Copyright Styles

Chrome Web Store warns end is coming for uBlock Origin

[…] With the stable release of Chrome 127 on July 23, 2024, the full spectrum of Chrome users could see the warning. One user of the content-blocking add-on filed a GitHub Issue about the notification.

“This extension may soon no longer be supported because it doesn’t follow best practices for Chrome extensions,” the Chrome Web Store (CWS) notification banner explained.

But Google is being too cautious in its language. uBlock Origin (uBO) will stop working entirely when Google Chrome drops support for Manifest v2 – the extension platform that uBO and many other extensions rely on to do their thing. That is what Google should be telling users.

Raymond Hill, the creator and maintainer of uBO, has made it clear that he will not be trying to adapt uBO to Google’s Manifest v3 – the extension architecture that is replacing v2.

“You will have to find an alternative to uBO before Google Chrome disables it for good,” he explained in a list of FAQs for uBlock Origin Lite – a content-blocking extension that functions on the upcoming Manifest v3 system but lacks the ability to create custom filters.

uBlock Origin Lite, he explained, is “not meant as a [Manifest v3]-compliant version of uBO, it’s meant as a reliable Lite version of uBO, suitable for those who used uBO in an install-and-forget manner.”

This is a nuanced statement. He’s not saying that if you move from uBO to uBlock Origin Lite all will be well and exactly the same – just that uBlock Origin Lite works on Manifest v3, so it will continue working after the v2 purge.

This nuance is needed because Manifest v2 provided uBlock Origin and other extensions deep access to sites and pages being visited by the user. It allowed adverts and other stuff to be filtered out as desired, whereas v3 pares back that functionality.

While it’s difficult to generalize about how the experience of uBO under Manifest v2 and uBOL under Manifest v3 will differ, Hill expects uBOL “will be less effective at dealing with” websites that detect and block content blockers, and at “minimizing website breakage” when stuff is filtered out, because existing uBO filters can’t be converted to declarative rules.
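To make the difference concrete, here is a minimal sketch of what content blocking looks like under Manifest v3’s declarativeNetRequest model: a static, declarative rule evaluated by the browser itself rather than by extension code. The URL filter below is an invented example, not an actual uBO filter:

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": {
      "urlFilter": "||ads.example.com^",
      "resourceTypes": ["script", "image"]
    }
  }
]
```

Under Manifest v2, an extension could instead run its own code against every request via the blocking webRequest API, which is what made uBO’s dynamic and custom filtering possible; that kind of scriptable, per-request logic has no direct equivalent as static rules like the one above.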

[…]

Source: Chrome Web Store warns end is coming for uBlock Origin • The Register

Samsung starts blocking sideloading, so Epic Games pulls Fortnite from the Galaxy Store

After it was discovered that Samsung would begin blocking any attempt to sideload apps, Epic Games has made the decision to remove Fortnite, among other titles, from the Galaxy Store.

When the Galaxy Z Fold 6 began to land in the hands of users, the preloaded version of One UI touted a brand-new attempt to block unverified apps from being sideloaded. Samsung’s One UI 6.1.1 asks if the user wants to turn on “Auto Blocker,” a function that blocks not only apps from unverified sources but also commands and software updates sent via USB cable.

Related: Samsung Galaxy phones now stop you from sideloading Android apps by default

Epic Games views this as poor behavior on Samsung’s part, citing it as one reason the company is pulling Fortnite from the Galaxy Store. A blog post notes that the decision was also made because of “ongoing Google proposals to Samsung to restrain competition in the market for Android app distribution.”

[…]

Source: Epic Games pulls Fortnite from the Galaxy Store

Come on Samsung, blocking sideloading and USB? Really? One of the advantages of Android is that it is a (more) open system.

US Congress Wants To Let Private Companies Own The Law – set standards you must comply with but can’t actually find or see easily

It sounds absolutely batty that there is a strong, bipartisan push to lock up aspects of our law behind copyright. But it’s happening. Even worse, the push is on to include this effort to lock up the law in the “must pass” National Defense Authorization Act (NDAA). This is the bill that Congress lights up like a Christmas tree with the various bills they know they can’t pass normally, every year.

And this year, they’re pushing the Pro Codes Act, a dangerous bill to lock up the law that has bipartisan support.

[…]

There are lots of standards out there, often developed by industry groups. These standards can be on all sorts of subjects, such as building codes or consumer safety or indicators for hazardous materials. The list goes on and on and on. Indeed, the National Institute of Standards and Technology has a database of over 27,000 such standards that are “included by reference” into law.

This is where things get wonky. Since many of these standards are put together by private organizations (companies, standards bodies, whatever), some of them could qualify for copyright. But, then, lawmakers will often require certain products and services to meet those standards. That is, the laws will “reference” those standards (for example, how to have a building be built in a safe or non-polluting manner).

Many people, myself included, believe that the law must be public. How can the rule of law make any sense at all if the public cannot freely access and read the law? Thus, we believe that when a standard gets “incorporated by reference” into the law, it should become public domain, for the simple fact that the law itself must be public domain.

[…]

Two years ago, there was a pretty big victory, with a court finding that publishing standards that are “incorporated by reference” into law is fair use.

But industry standards bodies hate this, because often a large part of their own revenue stream comes from selling access to the standards they create, including those referenced by laws.

So they lobbied Congress to push this Pro Codes Act, which explicitly says that technical standards incorporated by reference retain copyright. To try to stave off criticism (and to mischaracterize the bill publicly), the bill says that standards bodies retain the copyright only if they make the standard available on a free, publicly accessible online source.

[…]

They added this last part to head off criticism that the law is “locked up.” They say things like “see, under this law, the law has to be freely available online.”

But that’s missing the point. It still means that the law itself is only available from one source, in one format. And while it has to be “publicly accessible online at no monetary cost,” that does not mean that it has to be publicly accessible in an easy or useful manner. It does not mean that there won’t be limitations on access or usage.

It is locking up the law.

But, because the law says that those standards must be released online free of cost, it allows the supporters of this law, like Issa, to falsely portray the law as “enhancing public access” to the laws.

That’s a lie.

[…]

It flies in the face of the very fundamental concept that “no one can own the law,” as the Supreme Court itself recently said. And to try and shove it into a must-pass bill about funding the military is just ridiculously cynical, while demonstrating that its backers know it can’t pass through the regular process.

Instead, this is an attempt by Congress to say, yes, some companies do get to own the law, so long as they put up a limited, difficult to use website by which you can see parts of the law.

Library groups and civil society groups are pushing back on this (disclaimer: we signed onto this letter). Please add your voice and tell Congress not to lock up the law.

Source: Congress Wants To Let Private Companies Own The Law | Techdirt

FTC asks 8 big names to explain surveillance pricing tech

The US Federal Trade Commission (FTC) has launched an investigation into “surveillance pricing,” a phenomenon likely familiar to anyone who’s had to buy something in an incognito browser window to avoid paying a premium.

Surveillance pricing, according to the FTC, is the use of algorithms, AI, and other technologies – most crucially combined with personal information about shoppers like location, demographics, credit, the computer used, and browsing/shopping history – “to categorize individuals and set a targeted price for a product or service.”

In other words, the regulator is concerned about the use of software to artificially push up prices for people based on their perceived circumstances, something that incognito mode can counter by more or less cloaking your online identity.
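To make the mechanism concrete, here is a purely hypothetical sketch in Python of how such a pricing engine might combine profile signals. All signal names and weights are invented for illustration; they are not drawn from the FTC’s filings or from any named vendor:

```python
# Hypothetical "surveillance pricing" sketch. Signals and weights
# are invented for illustration only.

def targeted_price(base_price: float, profile: dict) -> float:
    """Return an individualized price based on (assumed) profile signals."""
    multiplier = 1.0
    if profile.get("affluent_zip"):        # location signal
        multiplier += 0.10
    if profile.get("loyal_repeat_buyer"):  # shopping-history signal
        multiplier += 0.05
    if profile.get("incognito"):           # no profile data available:
        multiplier = 1.0                   # falls back to the list price
    return round(base_price * multiplier, 2)

print(targeted_price(100.0, {"affluent_zip": True, "loyal_repeat_buyer": True}))  # 115.0
print(targeted_price(100.0, {"incognito": True}))                                 # 100.0
```

The "incognito" branch illustrates the point made above: when the engine has no profile to key on, it can only fall back to the undifferentiated list price.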

[…]

But don’t mistake this for legal action – at this point it’s all about “helping the FTC better understand the opaque market for [surveillance pricing] products by third-party intermediaries,” the government watchdog said.

“Firms that harvest Americans’ personal data can put people’s privacy at risk,” FTC boss Lina Khan opined. “Now firms could be exploiting this vast trove of personal information to charge people higher prices.”

It’s not exactly a secret that sellers manipulate online prices, or that consumers know about it – recommendations to shop online in an incognito browser window are plentiful and go back years.

In this case, the FTC wants to know more about how Mastercard, JPMorgan Chase, Accenture, and McKinsey & Co are offering surveillance pricing products. It also wants the same information from some names you may not have heard of, like Revionics, which offers surveillance pricing services to companies like The Home Depot and Tractor Supply; Task Software, which counts McDonald’s and Starbucks among its customers; PROS, which supports Nestle, DigiKey and others; and Bloomreach, which provides similar services to the likes of Williams Sonoma, Total Wine, and Virgin Experience Days.

The FTC wants to probe what types of surveillance pricing products exist, the services they offer, how they’re collecting customer data and where it’s coming from, information about who they offered services to, and what sort of impacts these may have on consumers and the prices they pay.

[…]

Source: FTC asks 8 big names to explain surveillance pricing tech • The Register

Google’s reCAPTCHAv2 is just labor exploitation, boffins say

Google promotes its reCAPTCHA service as a security mechanism for websites, but researchers affiliated with the University of California, Irvine, argue it’s harvesting information while extracting human labor worth billions.

The term CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart,” and, as Google explains, it refers to a challenge-response authentication scheme that presents people with a puzzle or question that a computer cannot solve.

[…]

The utility of reCAPTCHA challenges appears to be significantly diminished in an era when AI models can answer CAPTCHA questions almost as well as humans.

Show me the money

UC Irvine academics contend CAPTCHAs should be binned.

In a paper [PDF] titled “Dazed & Confused: A Large-Scale Real-World User Study of reCAPTCHAv2,” authors Andrew Searles, Renascence Tarafder Prapty, and Gene Tsudik argue that the service should be abandoned because it’s disliked by users, costly in terms of time and datacenter resources, and vulnerable to bots – contrary to its intended purpose.

“I believe reCAPTCHA’s true purpose is to harvest user information and labor from websites,” asserted Andrew Searles, who just completed his PhD and was the paper’s lead author, in an email to The Register.

“If you believe that reCAPTCHA is securing your website, you have been deceived. Additionally, this false sense of security has come with an immense cost of human time and privacy.”

The paper, released in November 2023, notes that even back in 2016 researchers were able to defeat reCAPTCHA v2 image challenges 70 percent of the time. The reCAPTCHA v2 checkbox challenge is even more vulnerable – the researchers claim it can be defeated 100 percent of the time.

reCAPTCHA v3 has fared no better. In 2019, researchers devised a reinforcement learning attack that breaks reCAPTCHAv3’s behavior-based challenges 97 percent of the time.

[…]

The authors’ research findings are based on a study of users conducted over 13 months in 2022 and 2023. Some 9,141 reCAPTCHAv2 sessions were captured from unwitting participants and analyzed, in conjunction with a survey completed by 108 individuals.

Respondents gave the reCAPTCHA v2 checkbox puzzle 78.51 out of 100 on the System Usability Scale, while the image puzzle rated only 58.90. “Results demonstrate that 40 percent of participants found the image version to be annoying (or very annoying), while <10 percent found the checkbox version annoying,” the paper explains.

But when examined in aggregate, reCAPTCHA interactions impose a significant cost – some of which Google captures.

“In terms of cost, we estimate that – during over 13 years of its deployment – 819 million hours of human time has been spent on reCAPTCHA, which corresponds to at least $6.1 billion USD in wages,” the authors state in their paper.

“Traffic resulting from reCAPTCHA consumed 134 petabytes of bandwidth, which translates into about 7.5 million kWhs of energy, corresponding to 7.5 million pounds of CO2. In addition, Google has potentially profited $888 billion from cookies [created by reCAPTCHA sessions] and $8.75–32.3 billion per each sale of their total labeled data set.”
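Those headline numbers are easy to sanity-check. A back-of-envelope calculation, assuming (as an illustration; the paper’s exact wage assumption may differ) the US federal minimum wage of $7.25 per hour:

```python
# Back-of-envelope check of the paper's reCAPTCHA cost figures.
# Assumption (mine, not necessarily the paper's): human time valued
# at the US federal minimum wage of $7.25/hour.

HOURS_SPENT = 819_000_000   # human-hours spent on reCAPTCHA (paper's figure)
WAGE_USD_PER_HOUR = 7.25    # assumed wage

wage_value_billions = HOURS_SPENT * WAGE_USD_PER_HOUR / 1e9
print(f"~${wage_value_billions:.1f} billion")  # ~$5.9 billion

# Energy intensity implied by the paper's bandwidth and energy figures:
PETABYTES = 134
KWH = 7_500_000
print(f"{KWH / (PETABYTES * 1000):.0f} kWh per TB")  # ~56 kWh per TB
```

At $7.25/hour, 819 million hours come to roughly $5.9 billion, in the same ballpark as the paper’s “at least $6.1 billion,” which suggests the authors valued time at a slightly higher blended wage.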

Asked whether the costs Google shifts to reCAPTCHA users in the form of time and effort are unreasonable or exploitive, Searles pointed to the original white paper on CAPTCHAs by Luis von Ahn, Manuel Blum, and John Langford – which includes a section titled “Stealing cycles from humans.”

[…]

As the paper points out, image-labeling challenges have been around since 2004 and by 2010 there were attacks that could beat them 100 percent of the time. Despite this, Google introduced reCAPTCHA v2 with a fall-back image recognition security challenge that had been proven to be insecure four years earlier.

This makes no sense, the authors argue, from a security perspective. But it does make sense if the goal is obtaining image labeling data – the results of users identifying CAPTCHA images – which Google happens to sell as a cloud service.

“The conclusion can be extended that the true purpose of reCAPTCHA v2 is a free image-labeling labor and tracking cookie farm for advertising and data profit masquerading as a security service,” the paper declares.

[…]

Source: Google’s reCAPTCHAv2 is just labor exploitation, boffins say • The Register

UN Cybercrime Treaty does not define cybercrime, allows any definition and forces all signatories to secretly surveil their own population on request by any other signatory (think totalitarian states spying on people in democracies with no recourse)

[…] EFF colleague, Katitza Rodriguez, about the Cybercrime Treaty, which is about to pass, and which is, to put it mildly, terrifying:

https://www.eff.org/deeplinks/2024/07/un-cybercrime-draft-convention-dangerously-expands-state-surveillance-powers

Look, cybercrime is a real thing, from pig butchering to ransomware, and there’s real, global harms that can be attributed to it. Cybercrime is transnational, making it hard for cops in any one jurisdiction to handle it. So there’s a reason to think about formal international standards for fighting cybercrime.

But that’s not what’s in the Cybercrime Treaty.

Here’s a quick sketch of the significant defects in the Cybercrime Treaty.

The treaty has an extremely loose definition of cybercrime, and that looseness is deliberate. In authoritarian states like China and Russia (whose delegations are the driving force behind this treaty), “cybercrime” has come to mean “anything the government disfavors, if you do it with a computer.” “Cybercrime” can mean online criticism of the government, or professions of religious belief, or material supporting LGBTQ rights.

Nations that sign up to the Cybercrime Treaty will be obliged to help other nations fight “cybercrime” – however those nations define it. They’ll be required to provide surveillance data – for example, by forcing online services within their borders to cough up their users’ private data, or even to pressure employees to install back-doors in their systems for ongoing monitoring.

These obligations to aid in surveillance are mandatory, but much of the Cybercrime Treaty is optional. What’s optional? The human rights safeguards. Member states “should” or “may” create standards for legality, necessity, proportionality, non-discrimination, and legitimate purpose. But even if they do, the treaty can oblige them to assist in surveillance orders that originate with other states that decided not to create these standards.

When that happens, the citizens of the affected states may never find out about it. There are eight articles in the treaty that establish obligations for indefinite secrecy regarding surveillance undertaken on behalf of other signatories. That means that your government may be asked to spy on you and the people you love, they may order employees of tech companies to backdoor your account and devices, and that fact will remain secret forever. Forget challenging these sneak-and-peek orders in court – you won’t even know about them:

https://www.eff.org/deeplinks/2024/06/un-cybercrime-draft-convention-blank-check-unchecked-surveillance-abuses

Now here’s the kicker: while this treaty creates broad powers to fight things governments dislike, simply by branding them “cybercrime,” it actually undermines the fight against cybercrime itself. Most cybercrime involves exploiting security defects in devices and services – think of ransomware attacks – and the Cybercrime Treaty endangers the security researchers who point out these defects, creating grave criminal liability for the people we rely on to warn us when the tech vendors we rely upon have put us at risk.

[…]

When it comes to warnings about the defects in their own products, corporations have an irreconcilable conflict of interest. Time and again, we’ve seen corporations rationalize their way into suppressing or ignoring bug reports. Sometimes, they simply delay the warning until they’ve concluded a merger or secured a board vote on executive compensation.

Sometimes, they decide that a bug is really a feature.

Note: Responsible disclosure is something people should really “get” by now.

[…]

The idea that users are safer when bugs are kept secret is called “security through obscurity” and no one believes in it – except corporate executives.

[…]

The spy agencies have an official doctrine defending this reckless practice: they call it “NOBUS,” which stands for “No One But Us.” As in: “No one but us is smart enough to find these bugs, so we can keep them secret and use them to attack our adversaries, without worrying about those adversaries using them to attack the people we are sworn to protect.”

NOBUS is empirically wrong.

[…]

The leak of these cyberweapons didn’t just provide raw material for the world’s cybercriminals, it also provided data for researchers. A study of CIA and NSA NOBUS defects found that there was a one-in-five chance of a bug that had been hoarded by a spy agency being independently discovered by a criminal, weaponized, and released into the wild.

[…]

A Cybercrime Treaty is a good idea, and even this Cybercrime Treaty could be salvaged. The member-states have it in their power to accept proposed revisions that would protect human rights and security researchers, narrow the definition of “cybercrime,” and mandate transparency. They could establish member states’ powers to refuse illegitimate requests from other countries:

https://www.eff.org/press/releases/media-briefing-eff-partners-warn-un-member-states-are-poised-approve-dangerou

 

Source: Pluralistic: Holy CRAP the UN Cybercrime Treaty is a nightmare (23 Jul 2024) – Pluralistic: Daily links from Cory Doctorow

Google isn’t killing third-party cookies in Chrome after all, in a move that surprises absolutely no-one.

Google won’t kill third-party cookies in Chrome after all, the company said on Monday. Instead, it will introduce a new experience in the browser that will allow users to make informed choices about their web browsing preferences, Google announced in a blog post. Killing cookies, Google said, would adversely impact online publishers and advertisers. This announcement marks a significant shift from Google’s previous plans to phase out third-party cookies by early 2025.

[…]

Google will now focus on giving users more control over their browsing data, wrote Anthony Chavez, Google’s VP for Privacy Sandbox. This includes additional privacy controls like IP Protection in Chrome’s Incognito mode and ongoing improvements to Privacy Sandbox APIs.

Google’s decision provides a reprieve for advertisers and publishers who rely on cookies to target ads and measure performance. Over the past few years, the company’s plans to eliminate third-party cookies have been riding on a rollercoaster of delays and regulatory hurdles. Initially, Google aimed to phase out these cookies by the end of 2022, but the deadline was pushed to late 2024 and then to early 2025 due to various challenges and feedback from stakeholders, including advertisers, publishers, and regulatory bodies like the UK’s Competition and Markets Authority (CMA).

In January 2024, Google began rolling out a new feature called Tracking Protection, which restricts third-party cookies by default for 1% of Chrome users globally. This move was perceived as the first step towards killing cookies completely. However, concerns and criticism about the readiness and effectiveness of Google’s Privacy Sandbox, a collection of APIs designed to replace third-party cookies, prompted further delays.

The CMA and other regulatory bodies have expressed concerns about Google’s Privacy Sandbox, fearing it might limit competition and give Google an unfair advantage in the digital advertising market. These concerns have led to extended review periods and additional scrutiny, complicating Google’s timeline for phasing out third-party cookies. Shortly after Google’s Monday announcement, the CMA said that it was “considering the impact” of Google’s change of direction.

Source: Google isn’t killing third-party cookies in Chrome after all

Meta and Apple are Keeping their Next Big AI things Out of the EU – that’s a good thing

[…]

In a statement to The Verge, Meta spokesperson Kate McLaughlin said that the company’s next-gen Llama AI model is skipping Europe, placing the blame squarely on regulations. “We will release a multimodal Llama model over the coming months,” McLaughlin said, “but not in the EU due to the unpredictable nature of the European regulatory environment.”

A multimodal model is one that can take in and combine data across multiple mediums, like video and text, and use them together when generating output. That makes the AI more powerful, but also gives it more access to your device.

The move actually follows a similar decision from Apple, which said in June that it would be holding back Apple Intelligence in the EU due to the Digital Markets Act, or DMA, which puts heavy scrutiny on certain big tech “gatekeepers,” Apple and Meta both among them.

Meta’s concerns here could be less related to the DMA and more to the new AI Act, which recently finalized compliance deadlines and will force companies to make allowances for copyright and transparency starting August 2, 2026. Certain AI use cases, like those that try to read the emotions of schoolchildren, will also be banned. As the company tries to get a hold of AI on its social media platforms, increasing pressure is the last thing it needs.

How this will affect AI-forward Meta products like Ray-Ban smart glasses remains to be seen. Meta told The Verge that future multimodal AI releases will continue to be excluded from Europe, but that text-only model updates will still come to the region.

While the EU has yet to respond to Meta’s decision, EU competition regulator Margrethe Vestager previously called Apple’s plan to keep Apple Intelligence out of the EU a “stunning open declaration” of anticompetitive behavior.

Source: Meta Is Keeping Its Next Big AI Update Out of the EU | Lifehacker

Why is this good? Because the regulatory environment is predictable and run by rules that enforce openness, security, privacy and fair competition. The fact that Apple and Meta don’t want to launch these products in the EU shows that they are either incapable of or unwilling to comply with rules that benefit the public. You should not want to do business with shady dealers like that.

Firefox’s New ‘Privacy’ Feature Actually Gives Your Data to Advertisers – How and Why to Disable Firefox’s ‘Privacy-Preserving’ Ad Measurements

Firefox finds itself in a tricky position at times, because it wants to be a privacy friendly browser, but most of its funding comes from Google, whose entire business is advertising. With Firefox 128, the browser has introduced ‘privacy-preserving ad measurement,’ which is enabled by default. Despite the name, the actual implications of the feature have users upset.

What ‘privacy-preserving ad measurement’ means

In a blog post, Firefox’s parent company Mozilla has explained that this new feature is an experiment designed to shape a web standard for advertisers, one that relies less on cookies but still tracks you in some way. Mozilla says privacy-preserving ad measurement is only being used by a handful of sites at the moment, in order to tell if their ads were successful or not.

[…]

With privacy-preserving ad measurement, sites will be able to ask Firefox if people clicked on an ad, and if they ended up doing something the ad wanted them to (such as buying a product). Firefox doesn’t give this data directly to advertisers, but encrypts it, aggregates it, and submits it anonymously. This means that your browsing activity and other data about you is hidden from the advertiser, but they can see if their campaign delivered results or not. It’s a similar feature to those in Chrome’s Privacy Sandbox, although Google itself has run into regulatory issues implementing them.

Why you should disable this feature

Even though Mozilla’s intentions appear to be genuine, this feature should never have been enabled by default: whatever its label, it still gives advertisers your data. When advertisers started tracking people online, there were no privacy protections, laws, or standards to follow, and the industry chose to track all the data it could lay its hands on. No one ever asked users if they wanted to be tracked, or if they wanted to give advertisers access to their location, browser data, or personal preferences. If I’ve learned one thing from the way the online ad industry evolved, it’s that people should have a choice in whether their data is tracked. Even if it seeks to replace even more invasive systems, Firefox should have offered people a choice to opt into ad measurement, instead of enabling it silently.

[…]

To disable privacy-preserving ad measurement in Firefox 128, click the three-lines icon in the top-right corner in the browser. Then, go to Settings > Privacy & Security and scroll down to the Website Advertising Preferences section. There, disable Allow websites to perform privacy-preserving ad measurement.
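For those who prefer to flip the switch directly, coverage of Firefox 128 reports that the Settings toggle is backed by a single about:config preference. The pref name below is taken from that reporting, not from Mozilla documentation, so verify it in your own build before relying on it:

```
# in about:config – reportedly controls ‘privacy-preserving ad measurement’
dom.private-attribution.submission.enabled = false
```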

Source: How and Why to Disable Firefox’s ‘Privacy-Preserving’ Ad Measurements | Lifehacker

Only 5 years too late: British regulators to examine Big Tech’s digital wallets – and where is the EU?

British regulators said on Monday they were looking into the soaring use of digital wallets offered by Big Tech firms, including whether there are any competition, consumer protection or market integrity concerns.

The Financial Conduct Authority and the Payment Systems Regulator are seeking views on the benefits and risks, and will assess the impact digital wallets, such as Apple Pay, Google Pay and PayPal, have on competition and choice of payment options at checkout, among other things.

Digital wallets are now likely used by more than half of UK adults and have become “an increasingly important touchpoint” between Big Tech companies and UK consumers, they said in a statement.

“Digital wallets are steadily becoming a go-to payment type and while this presents exciting opportunities, there might be risks too,” said David Geale, the PSR’s managing director.

Nikhil Rathi, the FCA’s chief executive, said the growth of digital wallets represented a “seismic shift” in how people pay and regulators wanted to maximise the opportunities while “protecting against any risks this technology may present.”

Regulators and lawmakers in Europe and the United States have been examining the growing role of Big Tech in financial services.

The U.S. consumer watchdog last year proposed regulating payments and smartphone wallets, prompting criticism from the industry.

The British regulators said their review of digital wallets built on their previous work on contactless mobile payments and on the role of Big Tech firms in financial services.

After considering all feedback, the regulators will provide an update on Big Tech and digital wallets by the first quarter of 2025.

Source: British regulators to examine Big Tech’s digital wallets | Reuters

Considering that people using the services generally don’t understand that they are giving their payment history to the big tech company that runs it – and is not a bank – this is way way way too late.

Dutch DPA gets off its ass: fine of 600,000 euros for tracking cookies on Kruidvat.nl – detected in 2020

The Dutch Data Protection Authority (AP) has imposed a fine of 600,000 euros on the company behind the Kruidvat drugstore. Kruidvat.nl followed consumers with tracking cookies, without their knowledge or permission. AS Watson collected and used sensitive personal data from millions of website visitors against the rules.

The company behind Kruidvat collected data from website visitors and was able to create personal profiles. In addition to visitors’ location data, this included which pages they visited, which products they added to the shopping cart and purchased and which recommendations they clicked on.

That is very sensitive information, the AP points out, due to the specific nature of drugstore products, such as pregnancy tests, contraceptives or medication for all kinds of ailments. That sensitive information, linked to the location of the unique visitor (which may be traceable via the IP address), can paint a very specific and invasive picture of the people who visit Kruidvat.nl.

Kruidvat.nl should have asked permission to place tracking cookies on visitors’ computers. The GDPR privacy law sets a number of requirements for valid consent. These requirements are that consent must be given freely, for specific processing of personal data, on the basis of sufficient information and that there must be no doubt that consent has been given.

In the cookie banner on Kruidvat.nl, the boxes to agree to the installation of tracking software were checked by default. That’s not allowed. Visitors who still wanted to refuse the cookies had to go through many steps to achieve this. The AP has found that personal data of website visitors to Kruidvat.nl have been processed unlawfully.

At the end of 2019, the AP started an investigation into various websites, including Kruidvat.nl. The AP tested whether these websites met the requirements for placing (tracking) cookies. The AP checked whether permission for tracking cookies was asked from website visitors and, if so, how exactly this happened.

Kruidvat.nl was found not to comply in April 2020, after which the AP sent the company a letter. Later in 2020, the AP found that Kruidvat.nl was still not compliant, and started investigating the website further. The violation ended in October 2020.

There is growing public irritation about cookies and cookie notifications, ranging from annoying and misleading banners to concerns about the covert tracking of internet users. In 2024, the AP will check more often whether websites correctly request permission for tracking cookies or other tracking software.

Source: Boete van 600.000 euro voor tracking cookies op Kruidvat.nl – Emerce

WTFBBQ?! Firefox Starts collecting personal ad preferences

In a world where so much of our lives depend on the use of online services, the web browser used to access those services becomes of crucial importance. It becomes a question of whether we trust the huge corporate interests which control this software with such access to our daily lives, and it is vital that the browser world remains a playing field with many players in the game.

The mantle has traditionally fallen upon Mozilla’s Firefox browser to represent freedom from corporate ownership, but over the last couple of years even they have edged away from their open source ethos and morphed into an advertising company that happens to have a browser. We’re asking you: can we still trust Mozilla’s Firefox, when the latest version turns on ad measurement by default?

Such has been the dominance of Google’s Chromium in the browser world that it becomes difficult to find alternatives which aren’t based on it. We can see the attraction for developers: instead of pursuing the extremely hard task of developing a new browser engine, just use an off-the-shelf one on which someone else has already done the work. As a result, once you have discounted browsers such as the venerable Netsurf or Dillo, which are cool as heck but relatively useless for modern websites, the choices quickly descend into the esoteric. There are Ladybird and Servo, which are both promising but still too rough around the edges for everyday use, so what’s left? Probably LibreWolf represents the best option: a version of Firefox with a focus on privacy and security.

[…]

Source: Ask Hackaday: Has Firefox Finally Gone Too Far? | Hackaday

Many comments in the thread in the source. Definitely worth looking at.

Apple settles EU case by opening its iPhone payment system to rivals

The EU on Thursday accepted Apple’s pledge to open its “tap to pay” iPhone payment system to rivals as a way to resolve an antitrust case and head off a potentially hefty fine.

The European Commission, the EU’s executive arm and top antitrust enforcer, said it approved the commitments that Apple offered earlier this year and will make them legally binding.

Regulators had accused Apple in 2022 of abusing its dominant position by limiting access to its mobile payment technology.

Apple responded by proposing in January to allow third-party mobile wallet and payment service providers access to the contactless payment function in its iOS operating system. After Apple tweaked its proposals following testing and feedback, the commission said those “final commitments” would address its competition concerns.

“Today’s commitments end our Apple Pay investigation,” Margrethe Vestager, the commission’s executive vice-president for competition policy, told a press briefing in Brussels. “The commitments bring important changes to how Apple operates in Europe to the benefit of competitors and customers.”

Apple said in a prepared statement that it is “providing developers in the European Economic Area with an option to enable NFC [near-field communication] contactless payments and contactless transactions” for uses like car keys, corporate badges, hotel keys and concert tickets.

[…]

The EU deal promises more choice for Europeans. Vestager said iPhone users will be able to set a default wallet of their choice while mobile wallet developers will be able to use important iPhone verification functions like Face ID.

[…]

Analysts said there would be big financial incentives for companies to use their own wallets rather than letting Apple act as the middleman, resulting in savings that could trickle down to consumers. Apple charges banks 0.15% for each credit card transaction that goes through Apple Pay, according to the justice department’s lawsuit.

Apple must open up its payment system in the EU’s 27 countries plus Iceland, Norway and Liechtenstein by 25 July.

“As of this date, developers will be able to offer a mobile wallet on the iPhone with the same ‘tap-and-go’ experience that so far has been reserved for Apple Pay,” Vestager said. The changes will remain in force for a decade and will be monitored by a trustee.

Breaches of EU competition law can draw fines worth up to 10% of a company’s annual global revenue, which in Apple’s case could have amounted to tens of billions of euros.

“The main advantage to the issuer bank of supporting an alternative to Apple Pay via iPhone is the reduction in fees incurred, which can be substantial,” said Philip Benton, a principal analyst at research and advisory firm Omdia. To encourage iPhone users to switch away from Apple Pay to another mobile wallet, “the fee reduction needs to be partially passed onto the consumer” through benefits like cashback or loyalty rewards, he said.

Banks and consumers could also benefit in other ways.

If companies use their own apps for tap-and-go payments, they would get “full visibility” of their customers’ transactions, said Ben Wood, chief analyst at CCS Insight. That data would allow them to “build brand loyalty and trust and offer more personalised services, rewards and promotions directly to the user”, he said.

Source: Apple settles EU case by opening its iPhone payment system to rivals | Apple | The Guardian

Note: Currently, Apple has this full visibility of your transactions. Are you sure you want to trust a company like that with your financial data?

I wonder how childishly Apple will handle this, considering how it has gone about “opening up” its App Store and allowing home screen apps (not really at all).

Why all Chromium browsers tell Google about your CPU, GPU usage? A whitewashing bullshit explanation.

Running a Chromium-based browser, such as Google Chrome or Microsoft Edge? The chances are good it’s quietly telling Google all about your CPU and GPU usage when you visit one of the search giant’s websites.

The feature is, from what we can tell, for performance monitoring and not really for tracking – Google knows who you are and what you’re doing anyway when you’re logged into and using its sites – but it does raise some antitrust concerns in light of Europe’s competition-fostering Digital Markets Act (DMA).

When visiting a *.google.com domain, the Google site can use the API to query the real-time CPU, GPU, and memory usage of your browser, as well as info about the processor you’re using, so that whatever service is being provided – such as video-conferencing with Google Meet – could, for instance, be optimized and tweaked so that it doesn’t overly tax your computer. The functionality is implemented as an API provided by an extension baked into Chromium – the browser brains primarily developed by Google and used in Chrome, Edge, Opera, Brave, and others.

Non-Chromium-based browsers – such as Mozilla’s Firefox – don’t have that extension, which puts them at a potential disadvantage. Without the API, they may offer a worse experience on Google sites than what’s possible on the same hardware with Google’s own browser, because they can’t provide that live performance info.

There is, however, nothing technically stopping Moz or other browser-engine makers from implementing a similar extension themselves, if they so chose.

Crucially though, websites that compete against Google can’t access the Chromium API. This is where technical solutions start to look potentially iffy in the eyes of Europe’s DMA.

Netherlands-based developer Luca Casonato highlighted the extension’s existence this week on social media, and his findings went viral – with millions of views. We understand at least some people have known about the code for a while now – indeed, it’s all open source and can be found here in the preinstalled extension hangout_services.

That name should give you a clue to its origin. It was developed last decade to provide browser-side functionality to Google Hangouts – a product that got split into today’s Google Meet and Chat. Part of that functionality is logging for Google, upon request, stats about your browser’s use of your machine’s compute resources when visiting a *.google.com domain – such as meet.google.com.

Casonato noted that the extension can’t be disabled in Chrome, at least, and it doesn’t show up in the extension panel. He observed it’s also included in Microsoft Edge and Brave, both of which are Chromium based. We reached out to Casonato for more of his thoughts on this – though given the time differences between him in Europe and your humble vulture in the US, we didn’t immediately hear back.
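Casonato’s demonstration boiled down to a few lines of JavaScript run from any page on a *.google.com domain, which reach the preinstalled extension through Chrome’s externally_connectable messaging. The sketch below is a hedged reconstruction: the extension ID and the “cpu.getInfo” method name are taken from his widely shared demo, not from any documented API, and the call only returns data inside a Chromium-based browser on a Google domain:

```javascript
// Hypothetical reconstruction of the viral demo: ask the preinstalled
// hangout_services extension for live CPU stats. Only returns data when
// run from a page on a *.google.com domain in a Chromium-based browser.
const HANGOUT_SERVICES_ID = "nkeimhogjdpnpccoofpliimaahmaaome"; // ID as reported

function queryHangoutServices(method) {
  return new Promise((resolve) => {
    // chrome.runtime.sendMessage is only exposed to pages the extension
    // lists under externally_connectable – i.e. Google's own domains.
    if (typeof chrome === "undefined" || !chrome.runtime?.sendMessage) {
      resolve(null); // not a Chromium browser, or not on *.google.com
      return;
    }
    chrome.runtime.sendMessage(HANGOUT_SERVICES_ID, { method }, resolve);
  });
}

// Usage (from the DevTools console on, say, meet.google.com):
queryHangoutServices("cpu.getInfo").then((info) => console.log(info));
```

Run anywhere else, the sketch resolves to null, which is exactly the asymmetry the antitrust complaint is about: no other site, and no other browser, gets this channel.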

Explanation

If you’ve read this far there’s probably an obvious question on your mind: What’s to say this API is malicious? We’re not saying that, and neither is Casonato. Google isn’t saying that either.

“Today, we primarily use this extension for two things: To improve the user experience by optimizing configurations for video and audio performance based on system capabilities [and] provide crash and performance issue reporting data to help Google services detect, debug, and mitigate user issues,” a Google spokesperson told us on Thursday.

“Both are important for the user experience and in both cases we follow robust data handling practices designed to safeguard user privacy,” the spokesperson added.

As we understand it, Google Meet today uses the old Hangouts extension to, for one thing, vary the quality of the video stream if the current resolution is proving too much for your PC. Other Google sites are welcome to use the thing, too.

That all said, the extension’s existence could be harmful to competition as far as the EU is concerned – and that seems to be why Casonato pointed it out this week.

Source: Why Chromium tells Google sites about your CPU, GPU usage • The Register

A lovely explanation, but the fact remains that Chromium is sending personal information to a central company – Google – without informing users at all. This blanket explanation could be used to whitewash any information sent through Chromium: the contents of your memory? Improving user experience. The position of your mouse on websites? Improving user experience. It just does not wash.

Data breach exposes millions of mSpy spyware customer support tickets

Unknown attackers stole millions of customer support tickets, including personal information, emails to support, and attachments, including personal documents, from mSpy in May 2024. While hacks of spyware purveyors are becoming increasingly common, they remain notable because of the highly sensitive personal information often included in the data, in this case about the customers who use the service.

The hack encompassed customer service records dating back to 2014, which were stolen from the spyware maker’s Zendesk-powered customer support system.

mSpy is a phone surveillance app that promotes itself as a way to track children or monitor employees. Like most spyware, it is also widely used to monitor people without their consent. These kinds of apps are also known as “stalkerware” because people in romantic relationships often use them to surveil their partner without consent or permission.

The mSpy app allows whoever planted the spyware, typically someone who previously had physical access to a victim’s phone, to remotely view the phone’s contents in real-time.

As is common with phone spyware, mSpy’s customer records include emails from people seeking help to surreptitiously track the phones of their partners, relatives, or children, according to TechCrunch’s review of the data, which we independently obtained. Some of those emails and messages include requests for customer support from several senior-ranking U.S. military personnel, a serving U.S. federal appeals court judge, a U.S. government department’s watchdog, and an Arkansas county sheriff’s office seeking a free license to trial the app.

Even after amassing several million customer service tickets, the leaked Zendesk data is thought to represent only the portion of mSpy’s overall customer base who reached out for customer support. The number of mSpy customers is likely to be far higher.

Yet more than a month after the breach, mSpy’s owners, a Ukraine-based company called Brainstack, have not acknowledged or publicly disclosed the breach.

Troy Hunt, who runs data breach notification site Have I Been Pwned, obtained a copy of the full leaked dataset, adding about 2.4 million unique email addresses of mSpy customers to his site’s catalog of past data breaches.

[…]

Some of the email addresses belong to unwitting victims who were targeted by an mSpy customer. The data also shows that some journalists contacted the company for comment following the company’s last known breach in 2018. And, on several occasions, U.S. law enforcement agents filed or sought to file subpoenas and legal demands with mSpy. In one case following a brief email exchange, an mSpy representative provided the billing and address information about an mSpy customer — an alleged criminal suspect in a kidnapping and homicide case — to an FBI agent.

Each ticket in the dataset contained an array of information about the people contacting mSpy. In many cases, the data also included their approximate location based on the IP address of the sender’s device.

[…]

The emails in the leaked Zendesk data show that mSpy and its operators are acutely aware of what customers use the spyware for, including monitoring of phones without the person’s knowledge. Some of the requests cite customers asking how to remove mSpy from their partner’s phone after their spouse found out. The dataset also raises questions about the use of mSpy by U.S. government officials and agencies, police departments, and the judiciary, as it is unclear if any use of the spyware followed a legal process.

[…]

This is the third known mSpy data breach since the company began in around 2010. mSpy is one of the longest-running phone spyware operations, which is in part how it accumulated so many customers.

[…]

the data breach of mSpy’s Zendesk data exposed its parent company as a Ukrainian tech company called Brainstack.

[…]

Source: Data breach exposes millions of mSpy spyware customers | TechCrunch

Inputs, Outputs, and Fair Uses: Unpacking Responses to Journalists’ Copyright Lawsuits

The complaints against OpenAI and Microsoft in New York Times Company v. Microsoft Corporation and Daily News, LP v. Microsoft Corporation include multiple theories––for instance, vicarious copyright infringement, contributory copyright infringement, and improper removal of copyright information. Those theories, however, are ancillary to both complaints’ primary cause of action: direct copyright infringement. While the defendants’ motions to dismiss focus primarily on jettisoning the ancillary claims and acknowledge that “development of record evidence” is necessary for resolving the direct infringement claims, they nonetheless offer insight on how the direct infringement fight might unfurl.

Direct Infringement Via Inputs and Outputs: The Daily News plaintiffs claim that by “building training datasets containing” their copyrighted works without permission, the defendants directly infringe the plaintiffs’ copyrights. Inputting copyrighted material to train Gen AI tools, they aver, constitutes direct infringement. Regarding outputs, the Daily News plaintiffs assert that “by disseminating generative output containing copies and derivatives of the” plaintiffs’ content, the defendants’ tools also infringe the plaintiffs’ copyrights. The Daily News’s input (illicit training) and output (disseminating copies) allegations track earlier contentions of The New York Times Company.

Fair Use Inputs and “Fringe” Outputs: OpenAI’s June arguments in Daily News frame “the core issue”––one OpenAI says “is for a later stage of the litigation” because discovery must first generate a factual record––facing New York City-based federal judge Sidney Stein as “whether using copyrighted content to train a generative AI model is fair use under copyright law.” Fair use, a defense to copyright infringement, involves analyzing four statutory factors: 1) the purpose and character of the allegedly infringing use; 2) the nature of copyrighted work allegedly infringed upon; 3) the amount of the copyrighted work infringed upon and whether the amount, even if small, nonetheless goes to the heart of the work; and 4) whether the infringing use will harm the market value of (or serve as a market substitute for) the original copyrighted work.

So, how might ingesting copyrighted journalistic content––the training or input aspect of the alleged infringement––be a protected fair use? Microsoft argues in Daily News that its “and OpenAI’s tools [don’t] exploit the protected expression in the Plaintiffs’ digital content.” (emphasis added). That’s a key point because copyright law does not protect things like facts, “titles, names, short phrases, and slogans.” OpenAI asserts, in response to The New York Times Company’s lawsuit, that “no one . . . gets to monopolize facts or the rules of language.” Learning semantic rules and patterns of “language, grammar, and syntax”––predicting which words are statistically most likely to follow others––is, at bottom, the purpose of the fair use to which OpenAI and Microsoft say they’re putting newspaper articles. They’re ostensibly just leveraging copyrighted articles “internally” (emphasis in original) to identify and learn language patterns, not to reproduce the articles in which those words appear.

More fundamentally, OpenAI and Microsoft aren’t attempting to disseminate copies of what copyright law is intended to incentivize and protect––“original works of authorship” and “writings.” They aren’t, the defendants claim, trying to unfairly produce market substitutes for actual newspaper articles.

How, then, do they counter the newspapers’ output infringement allegations that the defendants’ tools sometimes produce verbatim versions of the newspapers’ copyrighted articles? OpenAI contends such regurgitative outcomes “depend on an elaborate effort [by the defendants] to coax such outputs from OpenAI’s products, in a way that violates the operative OpenAI terms of service and that no normal user would ever even attempt.” Regurgitations otherwise are “rare” and “unintended,” the company adds. Barring settlements, courts will examine the input and output infringement battles in the coming months and years.

Source: Inputs, Outputs, and Fair Uses: Unpacking Responses to Journalists’ Copyright Lawsuits | American Enterprise Institute – AEI

Sharing material used to be the norm for newspapers, and should be for LLMs

Even though parents insist that it is good and right to share things, the copyright world has succeeded in establishing the contrary as the norm. Now, sharing is deemed a bad, possibly illegal thing. But it was not always thus, as a fascinating speech by Ryan Cordell, Associate Professor in the School of Information Sciences and Department of English at the University of Illinois Urbana-Champaign, underlines. In the US in the nineteenth century, newspaper material was explicitly not protected by copyright, and was routinely exchanged between titles:

Nineteenth-century editors’ attitude toward text reuse is exemplified in a selection that circulated in the last decade of the century, though often abbreviated from the version I cite here, which insists that “an editor’s selections from his contemporaries” are “quite often the best test of his editorial ability, and that the function of his scissors are not merely to fill up vacant spaces, but to reproduce the brightest and best thoughts…from all sources at the editor’s command.” While noting that sloppy or lazy selection will produce “a stupid issue,” this piece claims that just as often “the editor opens his exchanges, and finds a feast for eyes, heart and soul…that his space is inadequate to contain.” This piece ends by insisting “a newspaper’s real value is not the amount of original matter it contains, but the average quality of all the matter appearing in its columns whether original or selected.”

Material was not only copied verbatim, but modified and built upon in the process. As a result of this constant exchange, alteration and enhancement, newspaper readers in the US enjoyed a rich ecosystem of information, and a large number of titles flourished, since the cost of producing suitable material for each of them was shared and thus reduced.

That historical fact in itself is interesting. It’s also important at a time when newspaper publishers are some of the most aggressive in demanding ever stronger – and ever more disproportionate – copyright protection for their products, for example through “link taxes”. But Cordell’s speech is not simply backward looking. It goes on to make another fascinating observation, this time about large language models (LLMs):

We can see in the nineteenth-century newspaper exchanges a massive system for recycling and remediating culture. I do not wish to slip into hyperbole or anachronism, and will not claim historical newspapers as a precise analogue for twenty-first century AI or large language models. But it is striking how often metaphors drawn from earlier media appear in our attempts to understand and explain these new technologies.

The whole speech is well worth reading as a useful reminder that the current copyright panic over LLMs is in part because we have forgotten that sharing material and helping others to build on it was once the norm. And despite blinkered and selfish views to the contrary, it is still the right thing to do, just as parents continue to tell their children.

Source: Sharing material used to be the norm for newspapers, and should be for LLMs – Walled Culture

Report finds most subscription services manipulate customers with ‘dark patterns’

Most subscription sites use “dark patterns” to influence customer behavior around subscriptions and personal data, according to a pair of new reports from global consumer protection groups. Dark patterns are “practices commonly found in online user interfaces [that] steer, deceive, coerce or manipulate consumers into making choices that often are not in their best interests.” The international research efforts were conducted by the International Consumer Protection and Enforcement Network (ICPEN) and the Global Privacy Enforcement Network (GPEN).

ICPEN reviewed 642 websites and mobile apps with a subscription component. The assessment revealed at least one dark pattern in use at almost 76 percent of the platforms, and multiple dark patterns at play in almost 68 percent of them. One of the most common dark patterns discovered was sneaking, where a company makes potentially negative information difficult to find: ICPEN said 81 percent of the platforms with automatic subscription renewal kept the option to turn off auto-renewal out of the purchase flow. Other dark patterns found at subscription services included interface interference, where the company's desired actions are made easier to perform, and forced action, where customers have to hand over information to access a particular function.

The companion report from GPEN examined dark patterns that could encourage users to compromise their privacy. In this review, nearly all of the more than 1,000 websites and apps surveyed used a deceptive design practice. More than 89 percent of them used complex and confusing language in their privacy policies. Interface interference was another key offender here, with 57 percent of the platforms making the least protective privacy option the easiest to choose and 42 percent using emotionally charged language that could influence users.

Even the most savvy of us can be influenced by these subtle cues to make suboptimal decisions. Those decisions might be innocuous ones, like forgetting that you’ve set a service to auto-renew, or they might put you at risk by encouraging you to reveal more personal information than needed. The reports didn’t specify whether the dark patterns were used in illicit or illegal ways, only that they were present. The dual release is a stark reminder that digital literacy is an essential skill.

Source: Report finds most subscription services manipulate customers with ‘dark patterns’

The US Supreme Court’s Contempt for Facts Is a Betrayal of Justice

When the Supreme Court’s Ohio v. EPA decision blocked Environmental Protection Agency limits on Midwestern states polluting their downwind neighbors, a sad but telling coda came in Justice Neil Gorsuch’s opinion. In five instances, it confused nitrogen oxide, a pollutant that contributes to ozone formation, with nitrous oxide, better known as laughing gas.

You can’t make this stuff up. This repeated mistake in the 5-4 decision exemplifies a high court not just indifferent to facts but contemptuous of them.

Public trust in the Supreme Court, already at a historic low, is now understandably plunging. In the last four years, a reliably Republican majority on the high court, led by Chief Justice John Roberts, has embarked on a remarkable spree against history and reality itself, ignoring or eliding facts in decisions involving school prayer, public health, homophobia, race, climate change, abortion and clean water, not to mention the laughing gas case.

The crescendo to this assault on expertise landed in June, when the majority’s decision overturning Chevron arrogated to the courts regulatory calls that have been made by civil servant scientists, physicians and lawyers for the last 40 years. (With stunning understatement, the Associated Press called it “a far-reaching and potentially lucrative victory to business interests.” No kidding.) The decision enthrones the high court, an unelected majority of technically incompetent and in some cases corrupt politicos in robes, as the arbiter of matters that hinge on vital facts about pollution, medicine, employment and much else. These matters govern our lives.

The 2022 Kennedy v. Bremerton School District school prayer decision hinged on a fable of a football coach offering “a quiet personal prayer,” in the words of the opinion. In reality, this coach was holding overt post-game prayer meetings on the 50-yard line, ones that an atheist player felt compelled to attend to keep off the bench. Last year’s 303 Creative v. Elenis decision, allowing a Web designer to discriminate against gay people, revolved entirely around a supposed request for a gay wedding website that never existed, attributed to a straight man who never made it. Again, you can’t make this stuff up. Unless you are on the Supreme Court. Then it becomes law.

Summing up the Court’s term on July 1, the legal writer Chris Geidner called attention to a more profound “important and disturbing reality” of the current majority’s relationship to facts. “When it needs to decide a matter for the right, it can and does accept questionable, if not false, claims as facts. If the result would benefit the left, however, there are virtually never enough facts to reach a decision.”

The “laughing gas” decision illustrates this nicely: EPA had asked 23 states to submit a state-based plan to reduce their downwind pollution. Of those, 21 proposed to do nothing to limit their nitrogen (not nitrous) oxide emissions. The other two didn’t respond at all. Instead of telling the states to cut their pollution as required by law, the Court’s majority invented a new theoretical responsibility for EPA––to account for future court cases keeping a state out of its Clean Air Act purview––and sent the case back to an appeals court.

Source: The Supreme Court’s Contempt for Facts Is a Betrayal of Justice | Scientific American

And that’s not even mentioning the decision giving sitting presidents immunity for criminal behaviour!

Speed limiters arrive for all new cars in the European Union

It was a big week for road safety campaigners in the European Union as Intelligent Speed Assistance (ISA) technology became mandatory on all new cars.

The rules came into effect on July 7 and follow a 2019 decision by the European Commission to make ISA obligatory on all new models and types of vehicles introduced from July 2022. Two years on, and the tech must be in all new cars.

European legislators reckon that the rules will make for safer roads. However, they will also add to the ever-increasing amount of technology rolling around the continent’s highways. While EU law has no legal force in the UK, it’s hard to imagine many manufacturers making an exemption for Britain.

So how does it work? In the first instance, the speed limit on a given road can be determined using data from a Global Navigation Satellite System (GNSS) – such as the Global Positioning System (GPS) – combined with a digital map. This might be supplemented with physical sign recognition.

If the driver is being a little too keen, the ISA system must notify them that the limit has been exceeded but is, according to the European Road Safety Charter, “not to restrict his/her possibility to act in any moment during driving.”

“The driver is always in control and can easily override the ISA system.”

There are four options available to manufacturers according to the regulations. The first two, a cascaded acoustic or vibrating warning, don’t intervene, while the latter two, haptic feedback through the acceleration pedal and a speed limiter, will. The European Commission noted, “Even in the case of speed control function, where the car speed will be automatically gently reduced, the system can be smoothly overridden by the driver by pressing the accelerator pedal a little bit deeper.”
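The behaviour described above can be sketched as a simple decision routine. This is a purely illustrative sketch, not the regulation's actual specification: function names, the `mode` labels, and the override flag are all invented here, and the two non-intervening warning variants are collapsed into a single "warning" mode.

```python
def resolve_speed_limit(map_limit_kph, sign_limit_kph):
    """Fuse the digital-map limit (from the GNSS position) with any
    camera-recognised sign. Prefer the sign when one is detected,
    since it reflects the road as it currently is."""
    if sign_limit_kph is not None:
        return sign_limit_kph
    return map_limit_kph


def isa_response(current_kph, map_limit_kph, sign_limit_kph,
                 mode="warning", driver_overriding=False):
    """Return the ISA system's action for one control cycle.

    mode loosely mirrors the options open to manufacturers:
      "warning" - acoustic/vibrating warning only (no intervention)
      "haptic"  - counter-pressure through the accelerator pedal
      "limiter" - gently reduce speed toward the limit

    The driver always stays in control: pressing the accelerator
    further (driver_overriding=True) suppresses any intervention.
    """
    limit = resolve_speed_limit(map_limit_kph, sign_limit_kph)
    if limit is None or current_kph <= limit:
        return "no_action"
    if driver_overriding or mode == "warning":
        return "warn_only"          # notify, but never restrict the driver
    if mode == "haptic":
        return "pedal_feedback"
    return "reduce_speed_gently"    # the speed-control function
```

For example, a car doing 60 km/h on a mapped 50 km/h road would get `"warn_only"` in the default warning mode, `"reduce_speed_gently"` in limiter mode, and `"warn_only"` again in limiter mode once the driver presses through the pedal, matching the Commission's note that the system can always be smoothly overridden.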

The RAC road safety spokesperson Rod Dennis said: “While it’s not currently mandated that cars sold in the UK have to be fitted with Intelligent Speed Assistance (ISA) systems, we’d be surprised if manufacturers deliberately excluded the feature from those they sell in the UK as it would add unnecessary cost to production.”

This writer has driven a car equipped with the technology, and while it would be unfair to name and shame particular manufacturers, things are a little hit-and-miss. Road signs are not always interpreted correctly, and maps are not always up to date, meaning the car is occasionally convinced that the speed limit differs from reality, with various beeps and vibrations to demonstrate its belief.

Dennis cautioned, “Anyone getting a new vehicle would be well advised to familiarise themselves with ISA and how it works,” and we would have to agree.

While it is important to understand that the technology is still a driver aid and can easily be overridden, it is not hard to detect the direction of travel.

Source: Speed limiters arrive for all new cars in the European Union • The Register

Paramount Axes Decades Of Comedy Central History In Latest Round Of Brunchlord Dysfunction

Last month we noted how the brunchlords in charge of Paramount (CBS) decided to eliminate decades of MTV News journalism history as part of their ongoing “cost saving” efforts. It was just the latest casualty in an ever-consolidating and very broken U.S. media business routinely run by some of the least competent people imaginable.

We’ve noted how with streaming growth slowing, there’s no longer money to be made goosing stock valuations via subscriber growth. So media giants (and the incompetent brunchlords that usually fail upward into positions of unearned power within them) have turned their attention to all the usual tricks: layoffs, pointless megamergers, price hikes, and more and more weird and costly consumer restrictions.

Part of that equation also involves being too cheap to preserve history, as we’ve seen countless times when a journalism or media company implodes and then immediately disappears not just staffers but decades of their hard work. Usually (and this is from my experience as a freelancer) without any warning or consideration of the impact whatsoever.

Paramount has been struggling after its ingenious strategy of making worse and worse streaming content while charging more and more money somehow hasn’t panned out. While the company looks around for merger and acquisition partners, they’ve effectively taken a hatchet to company staff and history.

First with the recent destruction of the MTV News archives and a major round of layoffs, and now with the elimination of years of Comedy Central history. Last week, as part of additional cost cutting moves, the company basically gutted the Comedy Central website, eliminating years of archived video history of numerous programs ranging from old South Park clips to episodes of The Colbert Report.

A website message and press statement by the company inform users that they can simply head over to the Paramount+ streaming app to watch older content:

As part of broader website changes across Paramount, we have introduced more streamlined versions of our sites, driving fans to Paramount+ to watch their favorite shows.

Except older episodes of The Daily Show and The Colbert Report can no longer be found on Paramount+, also due to layoffs and cost cutting efforts at the company. Paramount is roughly $14 billion in debt due to mismanagement, and a recent plan to merge with Skydance was scuttled at the last second.

Eventually Paramount will find somebody else to merge with in order to bump stock valuations, nab a fat tax cut, and justify excessive executive compensation (look at me, I’m a savvy dealmaker!). At which point, as we saw with the disastrous AT&T → Time Warner → Discovery series of mergers, an entirely new wave of layoffs, quality erosion, and chaos will begin as they struggle to pay off deal debt.

It’s all so profoundly pointless, and at no point does anything like product quality, customer satisfaction, employee welfare, or the preservation of history enter into it. The executives spearheading this repeated trajectory from ill-conceived business models to mindless mergers will simply be promoted to bigger and better ventures because there’s simply no financial incentive to learn from historical missteps.

The executives at the top of the heap usually make out like bandits utterly regardless of competency or outcomes, so why change anything?

Source: Paramount Axes Decades Of Comedy Central History In Latest Round Of Brunchlord Dysfunction | Techdirt