Italy is losing its mind because of copyright: it just made its awful Piracy Shield even worse

Walled Culture has been writing about Italy’s Piracy Shield system for a year now. It was clear from early on that its approach of blocking Internet addresses (IP addresses) to fight alleged copyright infringement – particularly the streaming of football matches – was flawed, and risked turning into another fiasco like France’s failed Hadopi law. The central issue with Piracy Shield is summed up in a recent post on the Disruptive Competition Blog:

The problem is that Italy’s Piracy Shield enables the blocking of content at the IP address and DNS level, which is particularly problematic in this time of shared IP addresses. It would be similar to arguing that if in a big shopping mall, in which dozens of shops share the same address, one shop owner is found to sell bootleg vinyl records with pirated music, the entire mall needs to be closed and all shops are forced to go out of business.
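The mall analogy can be sketched in a few lines. This is purely illustrative (the hostnames and address are made up): with shared hosting or a CDN, one IP address fronts many unrelated sites, so an IP-level block takes them all down, while a domain-level block hits only the named target.

```python
# Illustrative mapping: one shared IP address hosting several
# unrelated sites (all names and the address are invented).
hosted_on = {
    "203.0.113.7": [
        "pirate-stream.example",
        "local-bakery.example",
        "school-portal.example",
    ],
}

def sites_blocked_by_ip(ip):
    """An IP-level block darkens every site behind the address,
    not just the infringing one."""
    return hosted_on.get(ip, [])

def sites_blocked_by_domain(domain):
    """A domain-level block (e.g. via DNS) affects only the named site."""
    return [domain]
```

Blocking the shared address here takes three sites offline to stop one, which is exactly the overblocking the post describes.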

As that post points out, Italy’s IP blocking suffers from several underlying problems. One is overblocking, which has already happened, as Walled Culture noted back in March. Another issue is lack of transparency:

The Piracy Shield that has been implemented in Italy is fully automated, which prevents any transparency on the notified IP addresses and lacks checks and balances performed by third parties, who could verify whether the notified IP addresses are exclusively dedicated to piracy (and should be blocked) or not.

Piracy Shield isn’t working, and causes serious collateral damage, but instead of recognising this, its supporters have doubled down, and have just convinced the Italian parliament to pass amendments making it even worse, reported here by TorrentFreak:

VPN and DNS services anywhere on planet earth will be required to join Piracy Shield and start blocking pirate sites, most likely at their own expense, just like Italian ISPs are required to do already.

Moving forward, if pirate sites share an IP address with entirely innocent sites, and the innocent sites are outnumbered, ISPs, VPNs and DNS services will be legally required to block them all.

A new offence has been created that is aimed at service providers, including network access providers, who fail to promptly report illegal conduct by their users to the Italian judicial authorities or police. The maximum punishment is not just a fine, but imprisonment for up to one year. Just why this is absurd is made clear by this LinkedIn comment by Diego Ciulli, Head of Government Affairs and Public Policy at Google Italy (translation by DeepL):

Under the label of ‘combating piracy’, the Senate yesterday approved a regulation obliging digital platforms to notify the judicial authorities of all copyright infringements – present, past and future – of which they become aware. Do you know how many there are in Google’s case? Currently, 9,756,931,770.

In short, the Senate is asking us to flood the judiciary with almost 10 billion URLs – and foresees jail time if we miss a single notification.

If the rule is not corrected, the risk is to do the opposite of the spirit of the law: flooding the judiciary, and taking resources away from the fight against piracy.

The new law will make running an Internet access service so risky that many will probably just give up, reducing consumer choice. Freedom of speech will be curtailed, online security weakened, and Italy’s digital infrastructure degraded. The end result of this law will be an overall impoverishment of Italian Internet users, Italian business, and the Italian economy. And all because of one industry’s obsession with policing copyright at all costs.

Source: Italy is losing its mind because of copyright: it just made its awful Piracy Shield even worse – Walled Culture

23andMe is on the brink. What happens to all that genetic DNA data?

[…] The one-and-done nature of Wiles’ experience is indicative of a core business problem with the once high-flying biotech company that is now teetering on the brink of collapse. Wiles and many of 23andMe’s 15 million other customers never returned. They paid once for a saliva kit, then moved on.

Shares of 23andMe are now worth pennies. The company’s valuation has plummeted 99% from its $6 billion peak shortly after the company went public in 2021.

As 23andMe struggles for survival, customers like Wiles have one pressing question: What is the company’s plan for all the data it has collected since it was founded in 2006?

[…]

Andy Kill, a spokesperson for 23andMe, would not comment on what the company might do with its trove of genetic data beyond general pronouncements about its commitment to privacy.

[…]

When signing up for the service, about 80% of 23andMe’s customers have opted in to having their genetic data analyzed for medical research.

[…]

The company has an agreement with pharmaceutical giant GlaxoSmithKline, or GSK, that allows the drugmaker to tap the tech company’s customer data to develop new treatments for disease.

Anya Prince, a law professor at the University of Iowa’s College of Law who focuses on genetic privacy, said those worried about their sensitive DNA information may not realize just how few federal protections exist.

For instance, the Health Insurance Portability and Accountability Act, also known as HIPAA, does not apply to 23andMe since it is a company outside of the health care realm.

[…]

According to the company, all of its genetic data is anonymized, meaning there is no way for GSK, or any other third party, to connect the sample to a real person. That, however, could make it nearly impossible for a customer to renege on their decision to allow researchers to access their DNA data.

“I couldn’t go to GSK and say, ‘Hey, my sample was given to you — I want that taken out’ — if it was anonymized, right? Because they’re not going to re-identify it just to pull it out of the database,” Prince said.
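Prince’s point can be sketched in code. This is purely illustrative, not 23andMe’s actual pipeline: if “anonymized” means each record is stored under a random token and the customer-to-token mapping is discarded, a later deletion request has nothing to match against.

```python
import secrets

# Hypothetical research database keyed by random tokens.
research_db = {}

def share_anonymized(customer_id, genome):
    """Store the genome under a random, unlinkable pseudonym."""
    token = secrets.token_hex(8)
    research_db[token] = genome
    # The (customer_id -> token) mapping is deliberately NOT stored.

def find_my_records(customer_id):
    """Nothing in the database references the customer ID, so no
    record can be located for deletion."""
    return [t for t in research_db if t == customer_id]

share_anonymized("cust-42", "ACGT...")
```

The sample is in the database, but a lookup by customer ID returns nothing — which is exactly why “pull my sample out” becomes impossible without re-identification.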

[…]

the patchwork of state laws governing DNA data makes the genetic data of millions potentially vulnerable to being sold off, or even mined by law enforcement.

“Having to rely on a private company’s terms of service or bottom line to protect that kind of information is troubling — particularly given the level of interest we’ve seen from government actors in accessing such information during criminal investigations,” Eidelman said.

She points to how investigators used a genealogy website to identify the man known as the Golden State Killer, and how police homed in on an Idaho murder suspect by turning to similar databases of genetic profiles.

“This has happened without people’s knowledge, much less their express consent,” Eidelman said.

[…]

Last year, the company was hit with a major data breach that it said affected 6.9 million customer accounts, including about 14,000 who had their passwords stolen.

[…]

Some analysts predict that 23andMe could go out of business by next year, barring a bankruptcy proceeding that could potentially restructure the company.

[…]

Source: What happens to all of 23andMe’s genetic DNA data? : NPR

For more fun reading about this clusterfuck of a company and why giving away DNA data is a spectacularly bad idea:

License Plate Readers Are Creating a US-Wide Database of Cars – and political affiliation, planned parenthood and more

At 8:22 am on December 4 last year, a car traveling down a small residential road in Alabama used its license-plate-reading cameras to take photos of vehicles it passed. One image, which does not contain a vehicle or a license plate, shows a bright red “Trump” campaign sign placed in front of someone’s garage. In the background is a banner referencing Israel, a holly wreath, and a festive inflatable snowman.

Another image taken on a different day by a different vehicle shows a “Steelworkers for Harris-Walz” sign stuck in the lawn in front of someone’s home. A construction worker, with his face unblurred, is pictured near another Harris sign. Other photos show Trump and Biden (including “Fuck Biden”) bumper stickers on the back of trucks and cars across America.

[…]

These images were generated by AI-powered cameras mounted on cars and trucks, initially designed to capture license plates, but which are now photographing political lawn signs outside private homes, individuals wearing T-shirts with text, and vehicles displaying pro-abortion bumper stickers—all while recording the precise locations of these observations.

[…]

The detailed photographs all surfaced in search results produced by the systems of DRN Data, a license-plate-recognition (LPR) company owned by Motorola Solutions. The LPR system can be used by private investigators, repossession agents, and insurance companies; a related Motorola business, called Vigilant, gives cops access to the same LPR data.

[…]

those with access to the LPR system can search for common phrases or names, such as those of politicians, and be served with photographs where the search term is present, even if it is not displayed on license plates.

[…]

“I searched for the word ‘believe,’ and that is all lawn signs. There’s things just painted on planters on the side of the road, and then someone wearing a sweatshirt that says ‘Believe.’” Weist says. “I did a search for the word ‘lost,’ and it found the flyers that people put up for lost dogs and cats.”

Beyond highlighting the far-reaching nature of LPR technology, which has collected billions of images of license plates, the research also shows how people’s personal political views and their homes can be recorded into vast databases that can be queried.

[…]

Over more than a decade, DRN has amassed more than 15 billion “vehicle sightings” across the United States, and it claims in its marketing materials that it amasses more than 250 million sightings per month.

[…]

The system is partly fueled by DRN “affiliates” who install cameras in their vehicles, such as repossession trucks, and capture license plates as they drive around. Each vehicle can have up to four cameras attached to it, capturing images from all angles. These affiliates earn monthly bonuses and can also receive free cameras and search credits.

In 2022, Weist became a certified private investigator in New York State. In doing so, she unlocked the ability to access the vast array of surveillance software accessible to PIs. Weist could access DRN’s analytics system, DRNsights, as part of a package through investigations company IRBsearch. (After Weist published an op-ed detailing her work, IRBsearch conducted an audit of her account and discontinued it.)

[…]

While not linked to license plate data, one law enforcement official in Ohio recently said people should “write down” the addresses of people who display yard signs supporting Vice President Kamala Harris, the 2024 Democratic presidential nominee, exemplifying how a searchable database of citizens’ political affiliations could be abused.

[…]

In 2022, WIRED revealed that hundreds of US Immigration and Customs Enforcement employees and contractors were investigated for abusing similar databases, including LPR systems. The alleged misconduct in both reports ranged from stalking and harassment to sharing information with criminals.

[…]

 

Source: License Plate Readers Are Creating a US-Wide Database of More Than Just Cars | WIRED

Insecure Robot Vacuums From Chinese Company Deebot Collect Photos and Audio to Train Their AI

Ecovacs robot vacuums, which have been found to suffer from critical cybersecurity flaws, are collecting photos, videos and voice recordings — taken inside customers’ houses — to train the company’s AI models.

The Chinese home robotics company, which sells a range of popular Deebot models in Australia, said its users are “willingly participating” in a product improvement program.

When users opt into this program through the Ecovacs smartphone app, they are not told what data will be collected, only that it will “help us strengthen the improvement of product functions and attached quality”. Users are instructed to click “above” to read the specifics, however there is no link available on that page.

Ecovacs’s privacy policy — available elsewhere in the app — allows for blanket collection of user data for research purposes, including:

– The 2D or 3D map of the user’s house generated by the device
– Voice recordings from the device’s microphone
– Photos or videos recorded by the device’s camera

“It also states that voice recordings, videos and photos that are deleted via the app may continue to be held and used by Ecovacs…”

Source: Insecure Robot Vacuums From Chinese Company Deebot Collect Photos and Audio to Train Their AI

Dutch oppose Hungary’s approach to EU child sexual abuse regulation – or total surveillance of every smart device

The Netherlands’ government and opposition are both against the latest version of the controversial EU regulation aimed at detecting online child sexual abuse material (CSAM), according to an official position and an open letter published on Tuesday (1 October).

The regulation, aimed at detecting online CSAM, has been criticised for potentially allowing the scanning of private messages on platforms such as WhatsApp or Gmail.

However, the latest compromise text, dated 9 September, limits detection to known material, among other changes. ‘Known’ material refers to content that has already been circulating and detected, in contrast to ‘new’ material that has not yet been identified.

The Hungarian presidency of the Council of the EU shared a partial general approach, dated 24 September and seen by Euractiv, that mirrors the 9 September text but reduces the re-evaluation period from five years to three for grooming and new CSAM.

Limiting detection to known material could hinder authorities’ ability to surveil massive amounts of communications, suggesting the change is likely an attempt to address privacy concerns.

The Netherlands initially supported the proposal to limit detection to ‘known’ material but withdrew its support in early September, Euractiv reported.

On Tuesday (1 October), Amsterdam officially took a stance against the general approach, despite speculation last week suggesting the country might shift its position in favour of the regulation.

This is also despite the Dutch mostly maintaining that their primary concern lies with combating known CSAM – a focus that aligns with the scope of the latest proposal.

According to various statistics, the Netherlands hosts a significant amount of CSAM.

The Dutch had been considering supporting the proposal, or at least a “silent abstention” that might have weakened the blocking minority, signalling a shift since Friday (27 September), a source close to the matter told Euractiv.

While a change in the Netherlands’ stance could have affected the blocking minority in the EU Council, their current position now strengthens it.

If the draft law were to pass in the EU Council, the next stage would be interinstitutional negotiations, called trilogues, between the European Parliament, the Council of the EU, and the Commission to finalise the legislation.

Both the Dutch government and the opposition are against supporting the new partial general approach.

Opposition party GroenLinks-PvdA (Greens/EFA) published an open letter, also on Tuesday, backed by a coalition of national and EU-based private and non-profit organisations, urging the government to vote against the proposal.

According to the letter, the regulation will be discussed at the Justice and Home Affairs Council on 11 October, with positions coordinated among member states on 2 October.

Currently, an interim regulation allows companies to detect and report online CSAM voluntarily. Originally set to expire in 2024, this measure has been extended to 2026 to avoid a legislative gap, as the draft for a permanent law has yet to be agreed.

The Dutch Secret Service opposed the draft regulation because “introducing a scan application on every mobile phone” with infrastructure to manage the scans would be a complex and extensive system that would introduce risks to digital resilience, according to a decision note.

Source: Dutch oppose Hungary’s approach to EU child sexual abuse regulation – Euractiv

To find out more about how invasive the proposed scanning feature is, look through the articles here: https://www.linkielist.com/?s=csam

Mazda’s $10 Subscription For Remote Start Sparks Backlash After Killing Open Source Option

Mazda recently surprised customers by requiring them to sign up for a subscription in order to keep certain services. Now, notable right-to-repair advocate Louis Rossmann is calling out the brand. He points to several moves by Mazda as reasons for his anger toward them. However, it turns out that customers might still have a workaround.

Previously, the Japanese carmaker offered connected services, which included several features such as remote start, without the need for a subscription. At the time, the company informed customers that these services would eventually transition to a paid model.

More: Native Google Maps Won’t Work On New GM Cars Without $300 Subscription

It’s important to clarify that there are two very different types of remote start we’re talking about here. The first type is the one many people are familiar with where you use the key fob to start the vehicle. The second method involves using another device like a smartphone to start the car. In the latter, connected services do the heavy lifting.

Transition to paid services

What is wild is that Mazda used to offer the first option on the fob. Now, it only offers the second kind, where one starts the car via phone through its connected services for a $10 monthly subscription, which comes to $120 a year. Rossmann points out that one individual, Brandon Rorthweiler, developed a workaround in 2023 to enable remote start without Mazda’s subscription fees.

However, according to Ars Technica, Mazda filed a DMCA takedown notice to kill that open-source project. The company claimed it contained code that violated “[Mazda’s] copyright ownership” and used “certain Mazda information, including proprietary API information.” Additionally, Mazda argued that the project included code providing functionality identical to that found in its official apps available on the Apple App Store and Google Play Store.

That doesn’t mean an aftermarket remote starter kit won’t work though. In fact, with Mazda’s subscription model now in place, it’s not hard to imagine customers flocking to aftermarket solutions to avoid the extra fees. However, by not opting to pay for Mazda Connected Services, owners will also miss out on things like vehicle health reports, remote keyless entry, and vehicle status reports.

A growing trend

Bear in mind that this is just one case of an automaker trying to milk their customers with subscription-based features, which could net them millions in extra income. BMW, for example, installs adaptive suspension hardware in some vehicles but charges $27.50 per month (or $505 for a one-time purchase) to unlock the software that makes the suspension actually work.

And then there’s Ferrari’s plan to offer a battery subscription for extended warranty coverage on its hybrid models for a measly $7,500 per year!

[…]

Sure, you might have paid a considerable amount of money to buy your car, and it might legally be yours, but that does not ensure that you really own all of the features it comes with, unless you’re prepared to pay extra.

Source: Mazda’s $10 Subscription For Remote Start Sparks Backlash After Killing Open Source Option | Carscoops

LG Wants to Show You Ads Even When You’re Not Watching TV

The outlet reveals (via Android Authority) that the ads start playing before the screensaver hits the screen and are usually sponsored messages from LG or its partners. The review highlighted one specific ad for the LG Channels app: LG’s free live TV service with ads. FlatpanelsHD adds that according to LG’s ad division, users will soon start seeing ads for other products and services.

The review mentions that “some of the ads” can be disabled, and there’s also an option under ‘Additional Settings’ to disable screensaver ads. But it’s almost sinful to push ads on a $2,400 device.

What makes this whole thing more bizarre is that, according to the review, LG pushes the same ads with the same frequency on its cheaper offerings. Oddly, it does nothing to differentiate the experience of purchasing a high-end model from an entry-level one. The brand’s OLED line is already pricey, but the G4 is allegedly “one of the most expensive TVs on the market,” according to FlatpanelsHD. I can only imagine how this will play out for the South Korean company. As FlatpanelsHD said, “LG must reconsider this strategy if they want to sell high-end TVs.”

Source: LG Wants to Show You Ads Even When You’re Not Watching TV

Unbelievable, this:

Ford wants to listen in on you in your car to serve you ads as much as possible


Someday soon, if Ford has its way, drivers and passengers may be bombarded with infotainment ads tailored to their personal and vehicle data.

This sure-to-please-everyone idea comes via a patent application [PDF] filed by Ford Global Technologies late last month that proposes displaying ads to drivers based on their destination, route, who’s in the car, and various other data points able to be collected by modern vehicles.

According to the patent application, infotainment advertising could be varied depending on the situation and user feedback. In one example, Ford supposes showing a visual ad to passengers every 10 minutes while on the highway, and if someone responds positively to audio ads, the system could ramp up the frequency, playing audio ads every five minutes.

Of course, simply playing more ads might frustrate people, which Ford seems to understand because the pending patent notes it would have to account for “a user’s natural inclination to seek minimal or no ads.”

In order to assure advertisers that user preference is ultimately circumvented, Ford said its proposed infotainment system would be designed to “intelligently schedule variable durations of ads, with playing time seeking to maximize company revenue while minimizing the impact on user experience.”

The system would also be able to listen to conversations so it could serve ads during lulls in chatter, ostensibly to be less intrusive while being anything but.
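The scheduling logic the patent application describes can be sketched as follows. This is our reading of the filing, not Ford’s code; the function names and thresholds are invented for illustration. Positive responses ramp ad frequency up, and ads wait for a lull in cabin conversation.

```python
def next_ad_interval_minutes(current_interval, response_positive):
    """Patent example: visual ads every 10 minutes, stepping up to
    every 5 minutes when a user responds positively to audio ads."""
    if response_positive:
        return max(5, current_interval - 5)
    return current_interval

def may_play_ad(seconds_since_last_speech, lull_threshold_s=10):
    """Serve ads during lulls in conversation, 'ostensibly to be
    less intrusive while being anything but.'"""
    return seconds_since_last_speech >= lull_threshold_s
```

Note that both knobs move in the advertiser’s favour: engagement shortens the interval, and silence becomes an opening rather than a respite.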

Given the rush by some automakers to turn their vehicles into subscription-based cars-as-a-service, egged on by the chip world, we’re not surprised by efforts to wring more money out of motorists, this time with adverts. We assume patent filings similar to Ford’s have been made by other automakers, too.

Trust us!

Then there’s the fact that automakers aren’t terrific on privacy and safeguarding the kinds of info that are used to tailor ads. In September last year, Mozilla published a report on the privacy policies of several automakers whose connected vehicles harvest information about owners, finding that 25 major manufacturers – Ford among them – failed to live up to the Firefox maker’s standards.

Just a couple of months later, a Washington state appeals court ruled it was perfectly legal for vehicles to harvest text and call data from connected smartphones and store it all in memory.

US senators have urged the FTC to investigate several car makers for allegedly selling customer data unlawfully, though we note Ford is not among the companies accused in that matter.

That said, the patent application makes no mention of how the automaker would protect user data used to serve in-vehicle ads. A couple of other potentially privacy-infringing Ford patents from the past year are worth mentioning, too.

The ideas within a patent application should not be viewed as an indication of our product plans

In 2023, Ford filed a patent application for an embedded vehicle system that would automate vehicle repossession if car payments weren’t made. Over the summer, another application described a system in which vehicles monitor each other’s speeds; if one detects a nearby car speeding, it could snap photos using onboard cameras and send the images, along with speed data, directly to police or roadside monitors. Neither has privacy advocates thrilled.

Bear in mind that neither of those patents may ever see production, and this advertising one might not make it past the “let’s file this patent before the competition just in case” stage of life, either. That’s essentially what Ford told us.

“Submitting patent applications is a normal part of any strong business as the process protects new ideas and helps us build a robust portfolio of intellectual property,” a Ford spokesperson told The Register. “The ideas described within a patent application should not be viewed as an indication of our business or product plans.”

Ford also said it always puts customers first in development of new products and services, though didn’t directly answer questions about a lack of privacy assurances in the patent application. In any case, it may not actually happen. Until it does.

Source: Who wants in-car ads tailored to your journey, passengers? • The Register

Resistance to Hungarian presidency’s new push for child sexual abuse prevention regulation – because it’s a draconian spying law asking for 100% coverage of digital comms

Resistance to the Hungarian presidency’s approach to the EU’s draft law to combat online child sexual abuse material (CSAM) was still palpable during a member states’ meeting on Wednesday (4 September).

The Hungarian presidency of the Council of the EU aims to secure consensus on the proposed law to combat online child sexual abuse material (CSAM) by October, according to an EU diplomat and earlier reports by Politico.

Hungary has prepared a compromise note on the draft law, also reported by Contexte.

The note, presented at a meeting of ambassadors on Wednesday, seeks political guidance to make progress at the technical level, the EU diplomat told Euractiv.

With the voluntary regime expiring in mid-2026, most member states agree that urgent action is needed, the diplomat continued.

But some member states are still resistant to the Hungarian presidency’s latest approach.

The draft law to detect and remove online child sexual abuse material (CSAM) was removed from the agenda of Thursday’s (20 June) meeting of the Committee of Permanent Representatives (COREPER), which was supposed to vote on it.

Sources close to the matter told Euractiv that Poland and Germany remain opposed to the proposal, with smaller member states also voicing concerns, potentially forming a blocking minority.

Although France and the Netherlands initially supported the proposal, the Netherlands has since withdrawn its support, and Italy has indicated that the new proposal is moving in the right direction.

As a result, no agreement was reached to move forward.

Currently, an interim regulation allows companies to voluntarily detect and report online CSAM. Originally set to expire in 2024, this measure has been extended to 2026 to avoid a legislative gap, as the draft for a permanent law has yet to be agreed.

Hungary is expected to introduce a concrete textual proposal soon. The goal is to agree on a general approach by October, the EU diplomat said; a general approach is a fully agreed position among member states that serves as the basis for negotiations with the European Parliament.

Meanwhile, the European Commission is preparing to send a detailed opinion to Hungary regarding the draft law, expected by 30 September, Contexte reported on Wednesday.

[…]

In the text, the presidency also suggested extending the temporary exemption from certain provisions of the ePrivacy Directive, which governs privacy and electronic communications, for new CSAM and grooming.

[…]

Source: Resistance lingers to Hungarian presidency’s new push for child sexual abuse prevention regulation – Euractiv

See also:

The EU Commission’s Alleged CSAM Regulation ‘Experts’ giving them free rein to spy on everyone: can’t be found. OK then.

EU delays decision over continuous spying on all your devices *cough* scanning encrypted messages for kiddie porn

Signal, MEPs urge EU Council to drop law that puts a spy on everyone’s devices

European human rights court says backdooring encrypted comms is against human rights

EU Commission’s nameless experts behind its “spy on all EU citizens” *cough* “child sexual abuse” law

EU Tries to Implement Client-Side Scanning, death to encryption By Personalised Targeting of EU Residents With Misleading Ads

 

Second Circuit Says Libraries Disincentivize Authors To Write Books By Lending Them For Free

What would you think if an author told you they would have written a book, but they wouldn’t bother because it would be available to be borrowed for free from a library? You’d probably think they were delusional. Yet that argument has now carried the day in putting a knife into the back of the extremely useful Open Library from the Internet Archive.

The Second Circuit has upheld the lower court ruling and found that the Internet Archive’s Open Library is not fair use and therefore infringes on the copyright of publishers (we had filed an amicus brief in support of the Archive asking them to remember the fundamental purpose of copyright law and the First Amendment, which the Court ignored).

Even though this outcome was always a strong possibility, the final ruling is just incredibly damaging, especially in that it suggests that all libraries are bad for authors and cause them to no longer want to write. I only wish I were joking. Towards the end of the ruling (as we’ll get to below) it says that while freely lent-out books may help the public in the “short-term,” the “long-term” consequences would be that “there would be little motivation to produce new works.”

[…]

As you’ll recall, the Open Library is no different than a regular library. It obtains books legally (either through purchase or donation) and then lends out one-to-one copies of those books. It’s just that it lends out digital copies of them. To keep it identical to a regular library, it makes sure that only one digital copy can be lent out for every physical copy it holds. Courts have already determined that digitizing physical books is fair use, and the Open Library has been tremendously helpful to all sorts of people.
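The one-to-one constraint described above can be sketched in a few lines. The class and method names are illustrative, not the Internet Archive’s actual implementation: the rule is simply that digital loans never exceed the number of physical copies owned.

```python
class ControlledDigitalLending:
    """Sketch of controlled digital lending: at most one active
    digital loan per physical copy the library owns."""

    def __init__(self, physical_copies_owned):
        self.physical_copies_owned = physical_copies_owned
        self.active_loans = 0

    def borrow(self):
        # Lend only while an owned copy is "free", mirroring a
        # physical library's one-reader-per-copy constraint.
        if self.active_loans < self.physical_copies_owned:
            self.active_loans += 1
            return True
        return False

    def return_book(self):
        if self.active_loans > 0:
            self.active_loans -= 1
```

With one physical copy, a second borrower is turned away until the first loan is returned — exactly as at a brick-and-mortar library.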

The only ones truly annoyed by this are the publishers, who have always hated libraries and have long seen the shift to digital as an open excuse to effectively harm libraries. With licensed ebooks, the publishers have jacked up the prices so that (unlike with regular books), the library can’t just buy a single copy from any supplier and lend it out. Rather, publishers have made it prohibitively expensive to get ebook licenses, which come with ridiculous restrictions on how frequently books can be lent and more.

[…]

The key part of the case is whether or not the Internet Archive’s scanning and lending of books is fair use. The Second Circuit says that it fails the fair use four factors test. On the question of transformative use, the Internet Archive argued that because it was using technology to make lending of books more convenient and efficient, it was clearly transformative. Unfortunately, the court disagrees:

We conclude that IA’s use of the Works is not transformative. IA creates digital copies of the Works and distributes those copies to its users in full, for free. Its digital copies do not provide criticism, commentary, or information about the originals. Nor do they “add[] something new, with a further purpose or different character, altering the [originals] with new expression, meaning or message.” Campbell, 510 U.S. at 579. Instead, IA’s digital books serve the same exact purpose as the originals: making authors’ works available to read. IA’s Free Digital Library is meant to―and does―substitute for the original Works

The panel is not convinced by the massive change in making physical books digitally lendable:

True, there is some “change” involved in the conversion of print books to digital copies. See Infinity Broadcast Corp. v. Kirkwood, 150 F.3d 104, 108 n.2 (2d Cir. 1998) (“[A] change in format . . . is not technically a transformation.”). But the degree of change does not “go beyond that required to qualify as derivative.” Warhol II, 598 U.S. at 529. Unlike transformative works, derivative works “ordinarily are those that re-present the protected aspects of the original work, i.e., its expressive content, converted into an altered form.” Google Books, 804 F.3d at 225. To be transformative, a use must do “something more than repackage or republish the original copyrighted work.” Authors Guild, Inc. v. HathiTrust, 755 F.3d 87, 96 (2d Cir. 2014); see also TVEyes, 883 F.3d at 177 (“[A] use of copyrighted material that merely repackages or republishes the original is unlikely to be deemed a fair use.” (internal quotation marks omitted)). Changing the medium of a work is a derivative use rather than a transformative one.

But that's not what a derivative work is. Scanning a book is not making a derivative work; it's making a copy. A derivative work is something like making a movie out of a book. So this analysis is fundamentally wrong in calling this a derivative work, and the rest of the analysis is kinda wonky as a result of that error.

Tragically, the Court then undermines the important ruling in the Betamax/VCR case that found “time shifting” (recording stuff off your TV) to be fair use, even as it absolutely was repackaging the same content for the same purpose. The Court says that doesn’t matter because it “predated our use of the word ‘transformative’ as a term of art.” But that doesn’t wipe out the case as a binding precedent, even though the Court here acts as though it does.

Sony was decided long before modern technology made it possible for one to view virtually any content at any time. Put in context, the “time-shifting” permitted by the defendant’s tape recorders in Sony was a unique efficiency not widely available at the time, and certainly not offered by the plaintiff-television producer.

So because content is more widely available, this kind of shifting is no longer fair use? How does that make any sense at all?

Then the Court says (incorrectly — as we’ll explain shortly) that there’s really nothing new or different about what the Open Library does:

Here, by contrast, IA’s Free Digital Library offers few efficiencies beyond those already offered by Publishers’ own eBooks.

The problem, though, is that this isn’t quite true. Getting licensed ebooks out from libraries is a difficult and cumbersome practice and requires each library to have a vast ebook collection that none can possibly afford. As this lawsuit went down, more and more authors came out of the woodwork, explaining how research they had done for their books was only possible because of the Open Library and would have been impossible via a traditional library given the lending restrictions and availability restrictions.

[…]

From there, the Court explores whether or not the Internet Archive’s use here was commercial. The lower court said it was because, ridiculously, the Internet Archive had donation links on library pages. Thankfully, the panel here sees how problematic that would be for every non-profit:

We likewise reject the proposition that IA’s solicitation of donations renders its use of the Works commercial. IA does not solicit donations specifically in connection with its digital book lending services―nearly every page on IA’s website contains a link to “Donate” to IA. App’x 6091. Thus, as with its partnership with BWB, any link between the funds IA receives from donations and its use of the Works is too attenuated to render the use commercial. Swatch, 756 F.3d at 83. To hold otherwise would greatly restrain the ability of nonprofits to seek donations while making fair use of copyrighted works. See ASTM I, 896 F.3d at 449 (rejecting the argument that because free distribution of copyrighted industry standards enhanced a nonprofit organization’s fundraising appeal, the use was commercial).

It also disagrees that this use is commercial because there’s a referral link for people to go and buy a copy of the book, saying that’s “too attenuated”:

Any link between the funds IA receives from its partnership with BWB and its use of the Works is too attenuated for us to characterize the use as commercial on that basis

Even so, the lack of commerciality isn’t enough to protect the project on the first factor analysis, and it goes to the publishers.

[…]

Source: Second Circuit Says Libraries Disincentivize Authors To Write Books By Lending Them For Free | Techdirt

There is a lot more, but it's safe to say that US courts and copyright laws have run amok and are only feeding the rich to the detriment of the poor. Denying people libraries is a step beyond.

Internet Archive loses appeal – 4 greedy publishers shut down major library in insane luddite US law system

The Internet Archive's appeal loss could spell further trouble for the non-profit, as it is in the middle of another copyright lawsuit with music publishers that could cost it more than $400m if it loses.

The Internet Archive has been dealt a serious blow in court, as it lost an appeal case to share scanned books without the approval of publishers.

The loss could lead to serious repercussions for the non-profit, as hundreds of thousands of digital books have been removed from its library. The Internet Archive is also in the middle of another copyright lawsuit from multiple music labels for digitising vintage records.

What is the Internet Archive?

Based in San Francisco, the Internet Archive is one of the world’s most well-known libraries for scanned copies of millions of physical books that it lends to people all over the globe for free.

The non-profit organisation claims its mission is to provide “universal access to all knowledge” and has been archiving digital content for years such as books, movies, music, software and more.

The archive claims to have more than 20m freely downloadable books and texts, along with a collection of 2.3m modern e-books that can be borrowed – similar to a library. But while supporters say the Internet Archive is a valuable source of easily accessible information, its critics claim it breaches copyright laws.

What caused the major publisher lawsuit?

The Internet Archive let users access its vast digital library for years before the lawsuit began, but a decision during the Covid-19 pandemic prompted the legal response.

Previously, only a limited number of individuals were allowed to borrow a digital book from the non-profit’s Open Library service, a principle that the archive referred to as controlled digital lending.

But this rule was relaxed during the pandemic and led to the creation of the archive’s National Emergency Library, which meant an unlimited number of people could access the same e-books. After this decision, the major publishers launched their lawsuit and the archive went back to its controlled lending practices.

The four publishers – Hachette, Penguin Random House, Wiley, and HarperCollins – said the Internet Archive was conducting copyright infringement through its practices. But the lawsuit went after both library services and had a major impact – in June 2024, the Internet Archive said more than 500,000 books had been removed from its library as a result of the lawsuit.

The non-profit’s founder Brewster Kahle previously said libraries are “under attack at an unprecedented scale”, with a mix of book bans, defunding and “overzealous lawsuits like the one brought against our library”.

From a loss to an appeal

Unfortunately for the digital library, a judge ruled in favour of the publishers on 24 March 2023, agreeing with their claims that the Internet Archive's practices constitute "wilful digital piracy on an industrial scale" that hurts both writers and publishers.

The archive appealed this decision later that year, but the appeals court determined that it is not “fair use” for a non-profit to scan copyright-protected print books in their entirety and distribute those digital copies online. The appeals court also said there is not enough of a change from a printed copy to a digital one to constitute fair use.

“We conclude that IA’s use of the works is not transformative,” the appeals court said. “IA creates digital copies of the works and distributes those copies to its users in full, for free. Its digital copies do not provide criticism, commentary, or information about the originals.”

The appeals court did disagree with the previous court’s verdict that the Internet Archive’s use of these copyrighted materials is “commercial in nature” and said it is “undisputed that IA is a nonprofit entity and that it distributes its digital books for free”.

What does this mean for the Internet Archive?

The archive’s director of library services Chris Freeland said the non-profit is “disappointed” in the decision by the appeals court and that it is “reviewing the court’s opinion and will continue to defend the rights of libraries to own, lend and preserve books”.

Freeland also shared a link to readers where they can sign an open letter asking publishers to restore access to the 500,000 books removed from the archive’s library.

The loss also presents a bad precedent for the archive’s Great 78 Project, which is focused on the discovery and preservation of 78rpm records. The Internet Archive has been working to digitise millions of these recordings to preserve them, adding that the disks they were recorded onto are made of brittle material and can be easily broken.

“We aim to bring to light the decisions by music collectors over the decades and a digital reference collection of underrepresented artists and genres,” the Internet Archive says on the project page.

“The digitisation will make this less commonly available music accessible to researchers in a format where it can be manipulated and studied without harming the physical artefacts.”

But multiple music labels are suing the Internet Archive over this project and claim it has "wilfully reproduced" thousands of protected sound recordings without copyright authorisation. The labels are seeking damages of up to $150,000 for each protected sound recording infringed, which could lead to payments of more than $412m if the court rules against the Internet Archive.
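The headline figure follows from straightforward arithmetic: at the statutory maximum of $150,000 per infringed recording, a total of "more than $412m" implies roughly 2,750 recordings at issue. (The 2,750 count is inferred here, not stated in the source.) A quick back-of-the-envelope check:

```python
# Statutory maximum damages per willfully infringed sound recording (US)
per_recording = 150_000
# Approximate number of recordings implied by the reported total (inferred)
recordings = 2_750

total = per_recording * recordings
print(f"${total:,}")  # $412,500,000 -- matching the reported "more than $412m"
```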

Source: What you need to know about the Internet Archive’s appeal loss

Dutch DPA fines Clearview €30.5 million for violating the GDPR

Clearview AI is back in hot — and expensive — water, with the Dutch Data Protection Authority (DPA) fining the company €30.5 million ($33.6 million) for violating the General Data Protection Regulation (GDPR). The release explains that Clearview created “an illegal database with billions of photos of faces,” including Dutch individuals, and has failed to properly inform people that it’s using their data. In early 2023, Clearview’s CEO claimed the company had 30 billion images.

Clearview must immediately stop all violations or face up to €5.1 million ($5.6 million) in non-compliance penalties. “Facial recognition is a highly intrusive technology, that you cannot simply unleash on anyone in the world,” Dutch DPA chairman Aleid Wolfsen stated. “If there is a photo of you on the Internet — and doesn’t that apply to all of us? — then you can end up in the database of Clearview and be tracked.” He adds that facial recognition can help with safety but that “competent authorities” who are “subject to strict conditions” should handle it rather than a commercial company.

The Dutch DPA further states that since Clearview is breaking the law, using it is also illegal. Wolfsen warns that Dutch companies using Clearview could also be subject to “hefty fines.” Clearview didn’t issue an objection to the Dutch DPA’s fine, so it is unable to launch an appeal.

This fine is far from the first time an entity has stood up against Clearview. In 2020, the LAPD banned its use, and the American Civil Liberties Union (ACLU) sued Clearview, with the settlement ending sales of the biometric database to any private companies. Italy and the UK have previously fined Clearview €20 million ($22 million) and £7.55 million ($10 million), respectively, and instructed the company to delete any data of its residents. Earlier this year, the EU also barred Clearview from untargeted face scraping on the internet.

Source: Clearview faces a €30.5 million fine for violating the GDPR

Proposal to spy on all chat messages back on European political agenda

Europe is once again going to discuss the possibility of checking all citizens' chat messages for child abuse material. A (secret) consultation will take place on September 4, says Patrick Breyer, former MEP for the Pirate Party.

A few years ago, the European Commission proposed monitoring all citizens' chat messages. The European Parliament did not like the Commission's proposal and came up with its own, which excludes monitoring of end-to-end encrypted services.

At the end of June, Belgium, which then held the rotating presidency of the EU Council, put forward its own version of the proposal, under which only the uploading of photos, videos and links to them would be checked. This proposal did not get enough votes.

Germany and Poland are the biggest opponents within the EU. The Netherlands, Estonia, Slovenia, the Czech Republic and Austria would abstain from voting, according to Breyer.

A coalition of almost fifty civil society organisations, including the Dutch Offlimits, Bits of Freedom, Vrijschrift.org and ECNL, called on the European Commission in July to withdraw the chat control proposal and focus on measures that really protect children.

Source: Proposal to control chat messages back on European political agenda – Emerce

Guys, stop trying to be Big Brother in the EU – it changes how people behave and not for the better.

Mozilla removes telemetry service Adjust from mobile Firefox versions – it was spying on you secretly it turns out

Mozilla will soon remove the telemetry service Adjust from the Android and iOS versions of its Firefox and Firefox Focus browsers. It emerged that Adjust was collecting data on the effectiveness of Firefox ad campaigns without disclosing it.

Mozilla, the developers of Firefox, until recently used the telemetry service Adjust to collect data from its Firefox and Firefox Focus apps for both Android and iOS. Through this service, the company collected data on the number of installs of these specific apps following Mozilla’s ad campaigns.

[…]

The company’s actions may also result from previous complaints about the default enabling of ‘privacy-protecting ad metrics’ in Firefox. This option has been enabled by default since the July 9 release of Firefox 128.

The service collects data on how users respond to ads, which is shared aggregated with advertisers. Users can disable this option, however.

Mozilla says it regrets enabling such telemetry but defends the reason for turning it on by default. According to the browser provider, advertisers’ desire for information about the effectiveness of their campaigns is very difficult to escape.

[…]

Source: Mozilla removes telemetry service Adjust from mobile Firefox versions – Techzine Global

Oh dear. And I thought that Mozilla was the privacy friendly option. 2 strikes now.

Australian Regulators Decide To Write A Strongly Worded Letter About Clearview’s Privacy Law Violations, leave it at that

Clearview’s status as an international pariah really hasn’t changed much over the past few years. It may be generating fewer headlines, but nothing’s really changed about the way it does business.

Clearview has spent years scraping the web, compiling as much personal info as possible to couple with the billions of photos it has collected. It sells all of this to whoever wants to buy it. In the US, this means lots and lots of cop shops. Also, in the US, Clearview has mostly avoided running into a lot of legal trouble, other than a couple of lawsuits stemming from violations of certain states’ privacy laws.

Elsewhere in the world, it’s a different story. It has amassed millions in fines and plenty of orders to exit multiple countries immediately. These orders also mandate the removal of photos and other info gathered from accounts of these countries’ residents.

It doesn’t appear Clearview has complied with many of these orders, much less paid any of the fines. Clearview’s argument has always been that it’s a US company and, therefore, isn’t subject to rulings from foreign courts or mandates from foreign governments. It also appears Clearview might not be able to pay these fines if forced to, considering it’s now offering lawsuit plaintiffs shares in the company, rather than actual cash, to fulfill its settlement obligations.

Australia is one of several countries that claimed Clearview routinely violated privacy laws. Australia is also one of several that told Clearview to get out. Clearview's response to the allegations and mandates delivered by Australian privacy regulators was the standard boilerplate: we don't have offices in Australia so we're not going to comply with your demands.

Perhaps it’s this international stalemate that has prompted the latest bit of unfortunate news on the Clearview-Australia front. The Office of the Australian Information Commissioner (OAIC) has issued a statement that basically says it’s not going to waste any more time and money trying to get Clearview to respect Australia’s privacy laws. (h/t The Conversation)

Before giving up, the OAIC has this to say about its findings:

That determination found that Clearview AI, through its collection of facial images and biometric templates from individuals in Australia using a facial recognition technology, contravened the Privacy Act, and breached several Australian Privacy Principles (APPs) in Schedule 1 of the Act, including by collecting the sensitive information of individuals without consent in breach of APP 3.3 and failing to take reasonable steps to implement practices, procedures and systems to comply with the APPs.

Notably, the determination found that Clearview AI indiscriminately collected images of individuals’ faces from publicly available sources across the internet (including social media) to store in a database on the organisation’s servers. 

This was followed by the directive ordering Clearview to stop doing business in the country and delete any data it held pertaining to Australian residents. The statement notes Clearview’s only responses were a.) challenging the order in court in 2021 and b.) withdrawing entirely from the proceedings two years later. The OAIC notes that nothing appears to have changed in terms of how Clearview handles its collections. It also says it has no reason to believe Clearview has stopped collecting Australian persons’ data.

Despite all of that, it has decided to do absolutely nothing going forward:

Privacy Commissioner Carly Kind said, "I have given extensive consideration to the question of whether the OAIC should invest further resources in scrutinising the actions of Clearview AI, a company that has already been investigated by the OAIC and which has found itself the subject of regulatory investigations in at least three jurisdictions around the world as well as a class action in the United States. Considering all the relevant factors, I am not satisfied that further action is warranted in the particular case of Clearview AI at this time."

That’s disappointing. It makes it clear the company can avoid being held accountable for its legal violations by simply refusing to honor mandates issued by foreign countries or pay any fines levied. It can just continue to be the awful, ethically-horrendous company it has always been because, sooner or later, regulators are just going to give up and move on to softer targets.

[…]

Source: Australian Regulators Decide To Do Absolutely Nothing About Clearview’s Privacy Law Violations | Techdirt

Dutch officials fine Uber €290M for GDPR violations

Privacy authorities in the Netherlands have imposed a €290 million ($324 million) fine on ride-share giant Uber for sending driver data to servers in the United States – “a serious violation” of the EU’s General Data Protection Regulation (GDPR).

According to the Dutch Data Protection Authority (DPA), Uber spent years sending sensitive driver information from Europe to the US. Among the data that was transmitted were taxi licenses, location data, payment details, identity documents, and medical and criminal records. The data was sent abroad without the use of “transfer tools,” which the DPA said means the data wasn’t sufficiently protected.

“Businesses are usually obliged to take additional measures if they store personal data of Europeans outside the European Union,” Dutch DPA chairman Aleid Wolfsen said of the decision. “Uber did not meet the requirements of the GDPR to ensure the level of protection to the data with regard to transfers to the US. That is very serious.”

The Dutch DPA said that the investigation that led to the fine began after complaints from a group of more than 170 French Uber drivers who alleged their data was being sent to the US without adequate protection. Because Uber’s European operations are based in the Netherlands, enforcement for GDPR violations fell to the Dutch DPA.

Unfortunately for Uber, it already has an extensive history with the Dutch DPA, which has fined the outfit twice before.

The first came in 2018, when the authority fined Uber €600,000 for failing to report a data breach (a slugfest that several EU countries joined in on). The second, a €10 million fine, came earlier this year after Dutch officials determined Uber had failed to disclose its retention practices for EU drivers' data, had refused to name which countries data was sent to, and had obstructed its drivers' right to privacy.

[…]

The uncertainty Uber refers to stems from the EU’s striking down of the EU-US Privacy Shield agreement and the years of efforts to replace it with a new rule that defines the safe transfer of personal data between the two regions.

Uber claims it's done its job under the GDPR to safeguard data belonging to European citizens – it didn't even need to make any data transfer process changes to comply with the latest rules.

[…]

Source: Dutch officials fine Uber €290M for GDPR violations • The Register

Texas AG Latest To Sue GM For Covertly Selling Driver Data To Insurance Companies

Last year Mozilla released a report showcasing how the auto industry has some of the worst privacy practices of any tech industry in America (no small feat). Massive amounts of driver behavior data are collected by your car, and even more is hoovered up from your smartphone every time you connect. This data isn't secured, often isn't encrypted, and is sold to a long list of dodgy, unregulated middlemen.

Last March the New York Times revealed that automakers like GM routinely sell access to driver behavior data to insurance companies, which then use that data to justify jacking up your rates. The practice isn’t clearly disclosed to consumers, and has resulted in 11 federal lawsuits in less than a month.

Now Texas AG Ken Paxton has belatedly joined the fun, filing suit (press release, complaint) in the state district court of Montgomery County against GM for “false, deceptive, and misleading business practices”:

“Companies are using invasive technology to violate the rights of our citizens in unthinkable ways. Millions of American drivers wanted to buy a car, not a comprehensive surveillance system that unlawfully records information about every drive they take and sells their data to any company willing to pay for it.”

Paxton notes that GM’s tracking impacted 1.8 million Texans and 14 million vehicles, few if any of whom understood they were signing up to be spied on by their vehicle. This is, amazingly enough, the first state lawsuit against an automaker for privacy violations, according to Politico.

The sales pitch for this kind of tracking and sales is that good drivers will be rewarded for more careful driving. But as publicly traded companies, everyone in this chain, from insurance companies to automakers, is utterly financially disincentivized from giving anybody a consistent break for good behavior. That's just not how it's going to work. Everybody pays more and more. Always.

But GM and other automakers' primary problem is that they weren't telling consumers this kind of tracking was even happening in any clear, direct way. Usually it's buried deep in an unread end user agreement for roadside assistance apps and related services. Those services usually involve a free trial, but the agreement to data collection sticks around after the trial ends.

[…]

Source: Texas AG Latest To Sue GM For Covertly Selling Driver Data To Insurance Companies | Techdirt

UK Once Again Denies A Passport Over Applicant’s Name Due To Intellectual Property Concerns – again

I can't believe this, but it happened again. Almost exactly a decade ago, Tim Cushing wrote about a bonkers story out of the UK in which a passport applicant whose middle name was "Skywalker" was denied the passport due to purported trademark or copyright concerns. The point that ought to immediately leap to mind: nothing about a name or its appearance on a passport amounts to either creative expression being copied or use in commerce, meaning that neither copyright nor trademark law ought to apply in the slightest.

And you would have thought that coming out of that whole episode, proper guidance would have been given to the UK’s passport office so that this kind of stupidity doesn’t happen again. Unfortunately, it did happen again. A UK woman attempted to get a passport for her daughter, who she named Khaleesi, only to have it refused over the trademark for the Game of Thrones character that held the same fictional title.

Lucy, 39, from Swindon in Wiltshire, said the Passport Office initially refused the application for Khaleesi, six.

Officials said they were unable to issue a passport unless Warner Brothers gave permission because it owned the name’s trademark. But the authority has since apologised for the error.

“I was absolutely devastated, we were so looking forward to our first holiday together,” Lucy said.

While any intellectual property concerns over a passport are absolutely silly, I would argue that trademark law makes even less sense here than copyright would. Again, trademark law is designed specifically to protect the public from being confused as to the source of a good or service in commerce. There is no good or service nor commerce here. Lucy would simply like to take her own child across national borders. That’s it. Lucy had to consult with an attorney due to this insanity, which didn’t initially yield the proper result.

After seeking legal advice, her solicitors discovered that while there is a trademark for Game of Thrones, it is for goods and services – but not for a person’s name.

“That information was sent to the Passport Office who said I would need a letter from Warner Brothers to confirm my daughter is able to use that name,” she said.

This amounts to a restriction on the rights and freedoms of a child in a free country as a result of the choice their parents made about their name. Whatever your thoughts on IP laws in general, that simply cannot be the aim of literally any of them.

Now, once the media got a hold of all of this, the Passport Office eventually relented, said it made an error in denying the passport, and has put the application through. But even the government’s explanation doesn’t fully make sense.

Officials explained there had been a misunderstanding and that the guidance staff had originally given applies only to people changing their names.

“He advised me that they should be able to process my daughter’s passport now, ” she said.

Why would the changing of a name be any different? My name is my name, not a creative expression, nor a use in commerce. If I elect to change my name from “Timothy Geigner” to “Timothy Mickey Mouse Geigner”, none of that equates to an infringement of Disney’s rights, copyright nor trademark. It’s just my name. It would only be if I attempted to use my new name in commerce or as part of an expression that I might run afoul of either trademark or copyright law.

What this really is is the pervasive cancer that is ownership culture. It’s only with ownership culture that you get a passport official somehow thinking that Warner Bros. production of a fantasy show means a six year old can’t get a passport.

Source: UK Once Again Denies A Passport Over Applicant’s Name Due To Intellectual Property Concerns | Techdirt

New U.N. Cybercrime Treaty Could Threaten Human Rights

The United Nations approved its first international cybercrime treaty yesterday. The effort succeeded despite opposition from tech companies and human rights groups, who warn that the agreement will permit countries to expand invasive electronic surveillance in the name of criminal investigations. Experts from these organizations say that the treaty undermines the global human rights of freedom of speech and expression because it contains clauses that countries could interpret to internationally prosecute any perceived crime that takes place on a computer system.

[…]

among the watchdog groups that monitored the meeting closely, the tone was funereal. “The U.N. cybercrime convention is a blank check for surveillance abuses,” says Katitza Rodriguez, the Electronic Frontier Foundation’s (EFF’s) policy director for global privacy. “It can and will be wielded as a tool for systemic rights violations.”

In the coming weeks, the treaty will head to a vote among the General Assembly’s 193 member states. If it’s accepted by a majority there, the treaty will move to the ratification process, in which individual country governments must sign on.

The treaty, called the Comprehensive International Convention on Countering the Use of Information and Communications Technologies for Criminal Purposes, was first devised in 2019, with debates to determine its substance beginning in 2021. It is intended to provide a global legal framework to prevent and respond to cybercrimes.

[…]

experts have expressed that the newly adopted treaty lacks such safeguards for a free Internet. A major concern is that the treaty could be applied to all crimes as long as they involve information and communication technology (ICT) systems. HRW has documented the prosecution of LGBTQ+ people and others who expressed themselves online. This treaty could require countries’ governments to cooperate with other nations that have outlawed LGBTQ+ conduct or digital forms of political protest, for instance.

“This expansive definition effectively means that when governments pass domestic laws that criminalize a broad range of conducts, if it’s committed through an ICT system, they can point to this treaty to justify the enforcement of repressive laws,” said HRW executive director Tirana Hassan in a news briefing late last month.

[…]

“The treaty allows for cross-border surveillance and cooperation to gather evidence for serious crimes, effectively transforming it into a global surveillance network,” Rodriguez says. “This poses a significant risk of cross-border human rights abuses and transnational repression.”

[…]

Source: New U.N. Cybercrime Treaty Could Threaten Human Rights | Scientific American

For a more complete look at the threats presented by this treaty, also see: UN Cybercrime Treaty does not define cybercrime, allows any definition and forces all signatories to secretly surveil their own population on request by any other signatory (think totalitarian states spying on people in democracies with no recourse)

Suno & Udio To RIAA: Your Music Is Copyrighted, You Can’t Copyright Styles

AI music generators Suno and Udio responded to the lawsuits filed by the major recording labels, arguing that their platforms are tools for making new, original music that “didn’t and often couldn’t previously exist.”

“Those genres and styles — the recognizable sounds of opera, or jazz, or rap music — are not something that anyone owns,” the companies said. “Our intellectual property laws have always been carefully calibrated to avoid allowing anyone to monopolize a form of artistic expression, whether a sonnet or a pop song. IP rights can attach to a particular recorded rendition of a song in one of those genres or styles. But not to the genre or style itself.” TorrentFreak reports: “[The labels] frame their concern as one about ‘copies’ of their recordings made in the process of developing the technology — that is, copies never heard or seen by anyone, made solely to analyze the sonic and stylistic patterns of the universe of pre-existing musical expression. But what the major record labels really don’t want is competition.” The labels’ position is that any competition must be legal, and the AI companies state quite clearly that the law permits the use of copyrighted works in these circumstances. Suno and Udio also make it clear that snippets of copyrighted music aren’t stored as a library of pre-existing content in the neural networks of their AI models, and that the models do not output “a collage of ‘samples’ stitched together from existing recordings” when prompted by users.

“[The neural networks were] constructed by showing the program tens of millions of instances of different kinds of recordings,” Suno explains. “From analyzing their constitutive elements, the model derived a staggeringly complex collection of statistical insights about the auditory characteristics of those recordings — what types of sounds tend to appear in which kinds of music; what the shape of a pop song tends to look like; how the drum beat typically varies from country to rock to hip-hop; what the guitar tone tends to sound like in those different genres; and so on.” These models are vast stores, the defendants say, not of copyrighted music but of information about what musical styles consist of, and it is from that information that new music is made.
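As a toy illustration of the “statistical insights, not stored copies” argument (every genre, feature name, and number below is invented for illustration and has nothing to do with the actual models), the idea is that a system can retain per-style aggregates derived from a corpus while discarding the corpus itself:

```python
# Toy sketch: derive per-genre statistics from (genre, tempo_bpm, guitar_distortion)
# feature rows, then keep only the aggregates. All data here is made up.
from collections import defaultdict

corpus = [
    ("rock", 140, 0.8), ("rock", 128, 0.7),
    ("country", 96, 0.2), ("country", 104, 0.3),
    ("hip-hop", 90, 0.1), ("hip-hop", 84, 0.0),
]

def style_statistics(rows):
    """Aggregate features per genre; the original rows are not retained."""
    acc = defaultdict(lambda: {"n": 0, "tempo": 0.0, "distortion": 0.0})
    for genre, tempo, distortion in rows:
        a = acc[genre]
        a["n"] += 1
        a["tempo"] += tempo
        a["distortion"] += distortion
    return {g: {"avg_tempo": a["tempo"] / a["n"],
                "avg_distortion": a["distortion"] / a["n"]}
            for g, a in acc.items()}

stats = style_statistics(corpus)
print(stats["rock"])  # per-genre averages, e.g. avg_tempo 134.0 for "rock"
```

The real models are vastly more complex, but the legal distinction the defendants draw maps onto this shape: what survives training is information *about* styles, not the recordings.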

Most copyright lawsuits in the music industry are about reproduction and public distribution of identified copyright works, but that’s certainly not the case here. “The Complaint explicitly disavows any contention that any output ever generated by Udio has infringed their rights. While it includes a variety of examples of outputs that allegedly resemble certain pre-existing songs, the Complaint goes out of its way to say that it is not alleging that those outputs constitute actionable copyright infringement.” With Udio declaring that, as a matter of law, “that key point makes all the difference,” Suno’s conclusion is served raw. “That concession will ultimately prove fatal to Plaintiffs’ claims. It is fair use under copyright law to make a copy of a protected work as part of a back-end technological process, invisible to the public, in the service of creating an ultimately non-infringing new product.” Noting that Congress enacted the first copyright law in 1791, Suno says that in the 233 years since, not a single case has ever reached a contrary conclusion.

In addition to addressing allegations unique to their individual cases, the AI companies accuse the labels of various types of anti-competitive behavior. These range from imposing conditions to prevent streaming services from obtaining licensed music from smaller labels at lower rates, and seeking to impose a “no AI” policy on licensees, to claims that they “may have responded to outreach from potential commercial counterparties by engaging in one or more concerted refusals to deal.” The defendants say this type of behavior is fueled by the labels’ dominant control of copyrighted works and, by extension, the overall market. Here, however, ownership of copyrighted music is trumped by the existence and knowledge of musical styles, over which nobody can claim ownership or control. “No one owns musical styles. Developing a tool to empower many more people to create music, by scrupulously analyzing what the building blocks of different styles consist of, is a quintessential fair use under longstanding and unbroken copyright doctrine. Plaintiffs’ contrary vision is fundamentally inconsistent with the law and its underlying values.”
You can read Suno and Udio’s answers to the RIAA’s lawsuits here (PDF) and here (PDF).

Source: Suno & Udio To RIAA: Your Music Is Copyrighted, You Can’t Copyright Styles

Chrome Web Store warns end is coming for uBlock Origin

[…] With the stable release of Chrome 127 on July 23, 2024, the full spectrum of Chrome users could see the warning. One user of the content-blocking add-on filed a GitHub Issue about the notification.

“This extension may soon no longer be supported because it doesn’t follow best practices for Chrome extensions,” the Chrome Web Store (CWS) notification banner explained.

But Google is being too cautious in its language. uBlock Origin (uBO) relies on Manifest v2 to do its thing, and when Google Chrome drops support for Manifest v2, the extension will stop working entirely – that’s what Google should be telling users.

Raymond Hill, the creator and maintainer of uBO, has made it clear that he will not be trying to adapt uBO to Google’s Manifest v3 – the extension architecture that is replacing v2.

“You will have to find an alternative to uBO before Google Chrome disables it for good,” he explained in a list of FAQs for uBlock Origin Lite – a content-blocking extension that functions on the upcoming Manifest v3 system but lacks the ability to create custom filters.

uBlock Origin Lite, he explained, is “not meant as a [Manifest v3]-compliant version of uBO, it’s meant as a reliable Lite version of uBO, suitable for those who used uBO in an install-and-forget manner.”

This is a nuanced statement. He’s not saying that if you move from uBO to uBlock Origin Lite all will be well and exactly the same – just that uBlock Origin Lite works on Manifest v3, so it will continue working after the v2 purge.

This nuance is needed because Manifest v2 provided uBlock Origin and other extensions deep access to sites and pages being visited by the user. It allowed adverts and other stuff to be filtered out as desired, whereas v3 pares back that functionality.

While it’s difficult to generalize about how the experience of uBO under Manifest v2 and uBOL under Manifest v3 will differ, Hill expects uBOL “will be less effective at dealing with” websites that detect and block content blockers, and at “minimizing website breakage” when stuff is filtered out, because existing uBO filters can’t be converted to declarative rules.
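To make the v2/v3 difference concrete (a minimal sketch, not uBO’s actual rules – the blocked host here is hypothetical): under Manifest v2, an extension could inspect every request in its own JavaScript and decide dynamically whether to cancel it. Under Manifest v3, blocking must instead be expressed as static declarativeNetRequest rules that Chrome itself evaluates, such as:

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image", "xmlhttprequest"]
  }
}
```

Because such rules are declared up front and capped in number, per-request filtering logic that runs in code – the kind uBO’s custom filters depend on – has no direct equivalent under v3.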

[…]

Source: Chrome Web Store warns end is coming for uBlock Origin • The Register

Samsung starts blocking sideloading, so Epic Games pulls Fortnite from the Galaxy Store

After it was discovered that Samsung would begin blocking any attempt to sideload apps, Epic Games decided to remove Fortnite, among other titles, from the Galaxy Store.

When the Galaxy Z Fold 6 began to land in the hands of users, its preloaded version of One UI touted a brand-new attempt to block unverified apps from being sideloaded. Samsung’s One UI 6.1.1 asks if the user wants to turn on the “Auto Blocker,” a function that will block not only apps from unverified sources but also commands or software updates via USB cable.

Related: Samsung Galaxy phones now stop you from sideloading Android apps by default

Epic Games views this as poor behavior on Samsung’s part, citing it as one reason the company is pulling Fortnite from the Galaxy Store in One UI. A blog post notes that the decision was also made because of “ongoing Google proposals to Samsung to restrain competition in the market for Android app distribution.”

[…]

Source: Epic Games pulls Fortnite from the Galaxy Store

Come on Samsung, blocking sideloading and USB? Really? One of the advantages of Android is that it is a (more) open system.

US Congress Wants To Let Private Companies Own The Law – set standards you must comply with but can’t actually find or see easily

It sounds absolutely batty that there is a strong, bipartisan push to lock up aspects of our law behind copyright. But it’s happening. Even worse, the push is on to include this effort in the “must pass” National Defense Authorization Act (NDAA) – the bill that, every year, Congress lights up like a Christmas tree with the various measures it knows can’t pass normally.

And this year, they’re pushing the Pro Codes Act, a dangerous bill to lock up the law that has bipartisan support.

[…]

There are lots of standards out there, often developed by industry groups. These standards can be on all sorts of subjects, such as building codes or consumer safety or indicators for hazardous materials. The list goes on and on and on. Indeed, the National Institute of Standards and Technology has a database of over 27,000 such standards that are “included by reference” into law.

This is where things get wonky. Since many of these standards are put together by private organizations (companies, standards bodies, whatever), some of them could qualify for copyright. But, then, lawmakers will often require certain products and services to meet those standards. That is, the laws will “reference” those standards (for example, how to have a building be built in a safe or non-polluting manner).

Many people, myself included, believe that the law must be public. How can the rule of law make any sense at all if the public cannot freely access and read the law? Thus, we believe that when a standard gets “incorporated by reference” into the law, it should become public domain, for the simple fact that the law itself must be public domain.

[…]

Two years ago, there was a pretty big victory, with a court finding that publishing standards that are “incorporated by reference” into law is fair use.

But industry standards bodies hate this, because often a large part of their own revenue stream comes from selling access to the standards they create, including those referenced by laws.

So they lobbied Congress to push this Pro Codes Act, which explicitly says that technical standards incorporated by reference retain copyright. To try to stave off criticism (and to mischaracterize the bill publicly), the law says that standards bodies retain the copyright if the standards body makes the standard available on a free publicly accessible online source.

[…]

They added this last part to head off criticism that the law is “locked up.” They say things like “see, under this law, the law has to be freely available online.”

But that’s missing the point. It still means that the law itself is only available from one source, in one format. And while it has to be “publicly accessible online at no monetary cost,” that does not mean that it has to be publicly accessible in an easy or useful manner. It does not mean that there won’t be limitations on access or usage.

It is locking up the law.

But, because the law says that those standards must be released online free of cost, it allows the supporters of this law, like Issa, to falsely portray the law as “enhancing public access” to the laws.

That’s a lie.

[…]

It flies in the face of the very fundamental concept that “no one can own the law,” as the Supreme Court itself recently said. And to try and shove it into a must-pass bill about funding the military is just ridiculously cynical, while demonstrating that its backers know it can’t pass through the regular process.

Instead, this is an attempt by Congress to say, yes, some companies do get to own the law, so long as they put up a limited, difficult to use website by which you can see parts of the law.

Library groups and civil society groups are pushing back on this (disclaimer: we signed onto this letter). Please add your voice and tell Congress not to lock up the law.

Source: Congress Wants To Let Private Companies Own The Law | Techdirt

FTC asks 8 big names to explain surveillance pricing tech

The US Federal Trade Commission (FTC) has launched an investigation into “surveillance pricing,” a phenomenon likely familiar to anyone who’s had to buy something in an incognito browser window to avoid paying a premium.

Surveillance pricing, according to the FTC, is the use of algorithms, AI, and other technologies – most crucially combined with personal information about shoppers like location, demographics, credit, the computer used, and browsing/shopping history – “to categorize individuals and set a targeted price for a product or service.”

In other words, the regulator is concerned about the use of software to artificially push up prices for people based on their perceived circumstances, something that incognito mode can counter by more or less cloaking your online identity.
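As a purely hypothetical sketch of what such a pricing engine might do – every signal name, weight, and price below is invented for illustration, and is not taken from any of the companies the FTC is probing:

```python
# Hypothetical surveillance-pricing sketch: nudge a base price using
# personal signals. All signals and multipliers here are invented.
BASE_PRICE = 100.00

def targeted_price(profile: dict) -> float:
    """Return a per-shopper price based on inferred willingness to pay."""
    multiplier = 1.0
    if profile.get("device") == "high-end-phone":
        multiplier += 0.10          # pricier device -> assume deeper pockets
    if profile.get("zip_income_band") == "high":
        multiplier += 0.05          # affluent area -> charge more
    if profile.get("viewed_item_times", 0) >= 3:
        multiplier += 0.08          # repeated views signal strong intent
    if profile.get("incognito", False):
        multiplier = 1.0            # no usable history: fall back to base price
    return round(BASE_PRICE * multiplier, 2)

print(targeted_price({"device": "high-end-phone", "viewed_item_times": 4}))  # 118.0
print(targeted_price({"incognito": True}))                                   # 100.0
```

The incognito branch is the point the article makes: stripping the shopper’s history and identity signals collapses the “targeted” price back toward the default one.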

[…]

But don’t mistake this for legal action – at this point it’s all about “helping the FTC better understand the opaque market for [surveillance pricing] products by third-party intermediaries,” the government watchdog said.

“Firms that harvest Americans’ personal data can put people’s privacy at risk,” FTC boss Lina Khan opined. “Now firms could be exploiting this vast trove of personal information to charge people higher prices.”

It’s not exactly a secret that sellers manipulate online prices, or that consumers know about it – recommendations to shop online in an incognito browser window are plentiful and go back years.

In this case, the FTC wants to know more about how Mastercard, JPMorgan Chase, Accenture and McKinsey & Co are offering surveillance pricing products. It also wants the same information from some names you may not have heard of, like Revionics, which offers surveillance pricing services to companies like The Home Depot and Tractor Supply; Task Software, which counts McDonald’s and Starbucks among its customers; PROS, which supports Nestle, DigiKey and others; and Bloomreach, which provides similar services to companies like Williams Sonoma, Total Wine, and Virgin Experience Days.

The FTC wants to probe what types of surveillance pricing products exist, the services they offer, how they’re collecting customer data and where it’s coming from, whom they have offered services to, and what sort of impacts these may have on consumers and the prices they pay.

[…]

Source: FTC asks 8 big names to explain surveillance pricing tech • The Register

Google’s reCAPTCHAv2 is just labor exploitation, boffins say

Google promotes its reCAPTCHA service as a security mechanism for websites, but researchers affiliated with the University of California, Irvine, argue it’s harvesting information while extracting human labor worth billions.

The term CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart,” and, as Google explains, it refers to a challenge-response authentication scheme that presents people with a puzzle or question that a computer cannot solve.

[…]

The utility of reCAPTCHA challenges appears to be significantly diminished in an era when AI models can answer CAPTCHA questions almost as well as humans.

Show me the money

UC Irvine academics contend CAPTCHAs should be binned.

In a paper [PDF] titled “Dazed & Confused: A Large-Scale Real-World User Study of reCAPTCHAv2,” authors Andrew Searles, Renascence Tarafder Prapty, and Gene Tsudik argue that the service should be abandoned because it’s disliked by users, costly in terms of time and datacenter resources, and vulnerable to bots – contrary to its intended purpose.

“I believe reCAPTCHA’s true purpose is to harvest user information and labor from websites,” asserted Andrew Searles, who just completed his PhD and was the paper’s lead author, in an email to The Register.

“If you believe that reCAPTCHA is securing your website, you have been deceived. Additionally, this false sense of security has come with an immense cost of human time and privacy.”

The paper, released in November 2023, notes that even back in 2016 researchers were able to defeat reCAPTCHA v2 image challenges 70 percent of the time. The reCAPTCHA v2 checkbox challenge is even more vulnerable – the researchers claim it can be defeated 100 percent of the time.

reCAPTCHA v3 has fared no better. In 2019, researchers devised a reinforcement learning attack that breaks reCAPTCHAv3’s behavior-based challenges 97 percent of the time.

[…]

The authors’ research findings are based on a study of users conducted over 13 months in 2022 and 2023. Some 9,141 reCAPTCHAv2 sessions were captured from unwitting participants and analyzed, in conjunction with a survey completed by 108 individuals.

Respondents gave the reCAPTCHA v2 checkbox puzzle 78.51 out of 100 on the System Usability Scale, while the image puzzle rated only 58.90. “Results demonstrate that 40 percent of participants found the image version to be annoying (or very annoying), while <10 percent found the checkbox version annoying,” the paper explains.

But when examined in aggregate, reCAPTCHA interactions impose a significant cost – some of which Google captures.

“In terms of cost, we estimate that – during over 13 years of its deployment – 819 million hours of human time has been spent on reCAPTCHA, which corresponds to at least $6.1 billion USD in wages,” the authors state in their paper.

“Traffic resulting from reCAPTCHA consumed 134 petabytes of bandwidth, which translates into about 7.5 million kWhs of energy, corresponding to 7.5 million pounds of CO2. In addition, Google has potentially profited $888 billion from cookies [created by reCAPTCHA sessions] and $8.75–32.3 billion per each sale of their total labeled data set.”
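A quick back-of-the-envelope check, using only the figures quoted above, shows the implied rates behind the paper’s estimates are internally consistent:

```python
# Sanity-check the implied rates behind the UC Irvine paper's estimates,
# using only the headline numbers quoted in the article.
hours = 819e6            # human hours spent on reCAPTCHA
wages_usd = 6.1e9        # estimated value of that time
bandwidth_pb = 134       # petabytes of reCAPTCHA traffic
energy_kwh = 7.5e6       # energy attributed to that traffic
co2_lbs = 7.5e6          # CO2 attributed to that energy

implied_wage = wages_usd / hours                  # USD per hour
kwh_per_gb = energy_kwh / (bandwidth_pb * 1e6)    # 1 PB = 1e6 GB
lbs_co2_per_kwh = co2_lbs / energy_kwh

print(f"{implied_wage:.2f}")      # ≈ 7.45, close to the US federal minimum wage
print(f"{kwh_per_gb:.3f}")        # ≈ 0.056 kWh per GB transferred
print(f"{lbs_co2_per_kwh:.1f}")   # 1.0 lb of CO2 per kWh
```

In other words, the $6.1 billion figure values human time at roughly minimum wage, and the CO2 figure assumes about one pound of CO2 per kWh – a commonly used US grid average.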

Asked whether the costs Google shifts to reCAPTCHA users in the form of time and effort are unreasonable or exploitive, Searles pointed to the original white paper on CAPTCHAs by Luis von Ahn, Manuel Blum, and John Langford – which includes a section titled “Stealing cycles from humans.”

[…]

As the paper points out, image-labeling challenges have been around since 2004 and by 2010 there were attacks that could beat them 100 percent of the time. Despite this, Google introduced reCAPTCHA v2 with a fall-back image recognition security challenge that had been proven to be insecure four years earlier.

This makes no sense, the authors argue, from a security perspective. But it does make sense if the goal is obtaining image labeling data – the results of users identifying CAPTCHA images – which Google happens to sell as a cloud service.

“The conclusion can be extended that the true purpose of reCAPTCHA v2 is a free image-labeling labor and tracking cookie farm for advertising and data profit masquerading as a security service,” the paper declares.

[…]

Source: Google’s reCAPTCHAv2 is just labor exploitation, boffins say • The Register