Canon is getting away with printers that won’t scan without ink — but HP might pay

Were you hoping Canon might be held accountable for its all-in-one printers that mysteriously can’t scan when they’re low on ink, forcing you to buy more? Tough: the lawsuit we told you about last year quietly ended in a private settlement rather than growing into a big class action.

I just checked, and a judge already dismissed David Leacraft’s lawsuit in November, without Canon ever being forced to show what happens when you try to scan without a full ink cartridge. (Numerous Canon customer support reps wrote that it simply doesn’t work.)

Here’s the good news: HP, an even larger and more shameless manufacturer of printers, is still possibly facing down a class-action suit for the same practice.

As Reuters reports, a judge has refused to dismiss a lawsuit by Gary Freund and Wayne McMath that alleges many HP printers won’t scan or fax documents when their ink cartridges report that they’ve run low.

[…]

Interestingly, in the lawsuit responses I’ve read, neither Canon nor HP spent any time trying to argue that their printers do scan when they’re low on ink. Perhaps they can’t deny it? Epson, meanwhile, has an entire FAQ dedicated to reassuring customers that it hasn’t pulled that trick since 2008. (Don’t worry, Epson has other forms of printer enshittification.)

[…]

Source: Canon is getting away with printers that won’t scan sans ink — but HP might pay

Preservation Fail: Hasbro Wants Old ‘Transformers’ Games Re-Released, Except Activision Might Have Lost Them

And here we go again. We’ve been talking about how copyright has gotten in the way of cultural preservation generally for a while, and more specifically lately when it comes to the video game industry. The way this problem manifests is simple: video game publishers support the games they release for some period of time, and then they stop. When they stop, depending on the type of game, that game can become unavailable for legitimate purchase or use, either because it disappears from retail and online stores or because the servers needed to make it operational are taken offline. Meanwhile, copyright law prevents individuals and, in some cases, institutions from preserving those games and making them available to the public, as a library or museum would.

When you make these preservation arguments, one of the common retorts you get from the gaming industry and its apologists is that publishers already preserve these games for eventual re-release down the road, which is why they need to maintain their copyright protection on that content. We’ve pointed out the industry’s failures to do so in the past, but the story of Hasbro wanting to re-release several older Transformers video games, only to find that it can’t, is about as perfect an example as I can find.

Released in June 2010, Transformers: War for Cybertron was a well-received third-person shooter that got an equally great sequel in 2012, Fall of Cybertron. (And then in 2014 we got Rise of Dark Spark, which wasn’t very good and was tied into the live-action films.) What made the first two games so memorable and beloved was that they told their own stories about the origins of popular characters like Megatron and Optimus Prime while featuring kick-ass combat that included the ability to transform into different vehicles. Sadly, in 2018, all of these Activision-published Transformers games (and several it commissioned from other developers) were yanked from digital stores, making them hard to acquire and play in 2023. It seems that Hasbro now wants that to change, suggesting the games could make a perfect fit for Xbox Game Pass, once Activision, uh…finds them.

You read that right: finds them. What does that mean? Well, when Hasbro came calling to Activision looking to see if this was a possibility, it devolved into Activision doing a theatrical production parody called Dude, Where’s My Hard Drive? It seems that these games may or may not exist on some piece of hardware, but Activision literally cannot find it. Or maybe not, as you’ll read below. There seems to be some confusion about what Activision can and cannot find.

And, yes, the mantra in the comments that pirate sites are essentially solving this problem certainly applies here as well. So much so, in fact, that it sure sounds like Hasbro went that route to get what it needed for the toy design portion of this.

Interestingly, Activision’s lack of organization seems to have caused some headaches for Hasbro’s toy designers who are working on the Gamer Edition figures. The toy company explained that it had to load up the games on their original platforms and play through them to find specific details they wanted to recreate for the toys.

“For World of Cybertron we had to rip it ourselves, because [Activision] could not find it—they kept sending concept art instead, which we didn’t want,” explained Hasbro. “So we booted up an old computer and ripped them all out from there. Which was a learning experience and a long weekend, because we just wanted to get it right, so that’s why we did it like that.”

What’s strange is that despite the above, Activision responded to initial reports of all this indicating that the headlines were false and it does have… code. Or something.

Hasbro itself then followed up, apologizing for the confusion and saying that it made an error in stating the games were “lost”. But what’s strange about all that, in addition to the work Hasbro did to get around not having access to the actual games, is how long it took Activision to respond to all of this.

Activision has yet to confirm if it actually knows where the source code for the games is specifically located. I also would love to know why Activision waited so long to comment (the initial interview was posted on July 28) and why Hasbro claimed to not have access to key assets when developing its toys based on the games.

It’s also strange that Hasbro, which says it wants to put these games on Game Pass, hasn’t done so for years now. If the games aren’t lost, give ‘em to Hasbro, then?

Indeed. If this was all a misunderstanding, so be it. But if it was pure misunderstanding, the rest of the circumstances surrounding this story don’t make a great deal of sense. At the very least, the possibility that these games could simply have been lost to the world is concerning, and yet another data point for an industry that needs to do better when it comes to preservation efforts.

Source: Preservation Fail: Hasbro Wants Old ‘Transformers’ Games Re-Released, Except Activision Might Have Lost Them | Techdirt

China floats rules for facial recognition technology – they are good, and would be great if the govt was bound by them too!

China has released draft regulations to govern the country’s facial recognition technology that include prohibitions on its use to analyze race or ethnicity.

According to the Cyberspace Administration of China (CAC), the purpose is to “regulate the application of face recognition technology, protect the rights and interests of personal information and other personal and property rights, and maintain social order and public safety” as outlined by a smattering of data security, personal information, and network laws.

The draft rules, which are open for comments until September 7, include some vague directives not to use face recognition technology to disrupt social order, endanger national security, or infringe on the rights of individuals and organizations.

The rules also state that facial recognition tech must be used only where there is a specific purpose and sufficient necessity, only with strict protection measures in place, and only when non-biometric measures won’t do.

The rules require consent before processing face information, except in cases where it’s not required – which The Reg assumes means individuals such as prisoners and instances of national security. Parental or guardian consent is needed for those under the age of 14.

Building managers can’t require its use to enter and exit property – they must provide alternative means of verifying personal identity for those who want them.

Facial recognition also can’t be relied on for “major personal interests” such as social assistance and real estate disposal. For those, manual verification of personal identity must be used, with facial recognition serving only as an auxiliary check.

And collecting images for internal management should only be done in a reasonably sized area.

In businesses like hotels, banks, airports, art galleries, and more, the tech should not be used to verify personal identity. If the individual chooses to link their identity to the image, they should be informed either verbally or in writing and provide consent.

Collecting images is also not allowed in private spaces like hotel rooms, public bathrooms, and changing rooms.

Furthermore, those using facial surveillance techniques must display reminder signs. Personal images and identifying information must be kept confidential, and only anonymized data may be saved.

Under the draft regs, those that store face information of more than 10,000 people must register with a local branch of the CAC within 30 working days.

Most interesting, however, is Article 11, which, when translated from Chinese via automated tools, reads:

No organization or individual shall use face recognition technology to analyze personal race, ethnicity, religion, sensitive personal information such as beliefs, health status, social class, etc.

The CAC does not say if the Chinese Communist Party counts as an “organization.”

Human rights groups have credibly asserted that Uyghurs are routinely surveilled using facial recognition technology, in addition to being incarcerated, required to perform forced labor, re-educated to abandon their beliefs and cultural practices, and may even be subjected to sterilization campaigns.

Just last month, physical security monitoring org IPVM reported it came into possession of a contract between China-based Hikvision and Hainan Province’s Chengmai County for $6 million worth of cameras that could detect whether a person was ethnically Uyghur using minority recognition technology.

Hikvision denied the report and said it last provided such functionality in 2018.

Beyond facilitating identification of Uyghurs, it’s clear the cat is out of the bag when it comes to facial recognition technology in China, used by government and businesses alike. Local police use it to track down criminals, and its use feeds into China’s social credit system.

“‘Sky Net,’ a facial recognition system that can scan China’s population of about 1.4 billion people in a second, is being used in 16 Chinese cities and provinces to help police crackdown on criminals and improve security,” said state-sponsored media in 2018.

Regardless, the CAC said that those violating the draft rules, once passed, would be held to criminal and civil liability.

Source: China floats rules for facial recognition technology • The Register

Will Browsers Be Required By Law To Stop You From Visiting Infringing Sites?

Mozilla’s Open Policy & Advocacy blog has news about a worrying proposal from the French government:

In a well-intentioned yet dangerous move to fight online fraud, France is on the verge of forcing browsers to create a dystopian technical capability. Article 6 (para II and III) of the SREN Bill would force browser providers to create the means to mandatorily block websites present on a government provided list.

The post explains why this is an extremely dangerous approach:

A world in which browsers can be forced to incorporate a list of banned websites at the software-level that simply do not open, either in a region or globally, is a worrying prospect that raises serious concerns around freedom of expression. If it successfully passes into law, the precedent this would set would make it much harder for browsers to reject such requests from other governments.

If a capability to block any site on a government blacklist were required by law to be built into all browsers, then repressive governments would be handed an enormously powerful tool. There would be no way around that censorship, short of hacking the browser code. That might be an option for open source coders, but it certainly won’t be for the vast majority of ordinary users. As the Mozilla post points out:

Such a move will overturn decades of established content moderation norms and provide a playbook for authoritarian governments that will easily negate the existence of censorship circumvention tools.

It is even worse than that. If such a capability to block any site were built into browsers, it’s not just authoritarian governments that would be rubbing their hands with glee: the copyright industry would doubtless push for allegedly infringing sites to be included on the block list too. We know this, because it has already done it in the past, as discussed in Walled Culture the book (free digital versions).
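To make concrete what a mandated software-level block would mean, here is a minimal, purely hypothetical sketch in Python. No real browser works this way today; the domain names and the blocklist structure are invented for illustration:

```python
from urllib.parse import urlparse

# Hypothetical sketch of a government-mandated, software-level blocklist
# baked into a browser. All domain names here are invented.
GOVERNMENT_BLOCKLIST = {"banned-site.example", "infringing-site.example"}

def navigate(url: str) -> str:
    """Return what the browser would do for this URL."""
    host = urlparse(url).hostname or ""
    if host in GOVERNMENT_BLOCKLIST:
        # The page "simply does not open" -- nothing the user can click past.
        return "blocked"
    return "loaded"

print(navigate("https://banned-site.example/page"))  # blocked
print(navigate("https://news.example/article"))      # loaded
```

The point of the sketch is the architectural one Mozilla makes: once a check like this ships inside the browser binary, the only way around it is to modify and rebuild the browser itself.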

Not many people now remember, but in 2004, BT (British Telecom) caused something of a storm when it created CleanFeed:

British Telecom has taken the unprecedented step of blocking all illegal child pornography websites in a crackdown on abuse online. The decision by Britain’s largest high-speed internet provider will lead to the first mass censorship of the web attempted in a Western democracy.

Here’s how it worked:

Subscribers to British Telecom’s internet services such as BTYahoo and BTInternet who attempt to access illegal sites will receive an error message as if the page was unavailable. BT will register the number of attempts but will not be able to record details of those accessing the sites.

The key justification for what the Guardian called “the first mass censorship of the web attempted in a Western democracy” was that it only blocked illegal child sexual abuse material Web sites. It was therefore an extreme situation requiring an exceptional solution. But seven years later, the copyright industry were able to convince a High Court judge to ignore that justification, and to take advantage of CleanFeed to block a site, Newzbin 2, that had nothing to do with child sexual abuse material, and therefore did not require exceptional solutions:

Justice Arnold ruled that BT must use its blocking technology CleanFeed – which is currently used to prevent access to websites featuring child sexual abuse – to block Newzbin 2.

Exactly the logic used by copyright companies to subvert CleanFeed could be used to co-opt the censorship capabilities of browsers with built-in Web blocking lists. As with CleanFeed, the copyright industry would doubtless argue that since the technology already exists, why not apply it to tackling copyright infringement too?

That very real threat is another reason to fight this pernicious, misguided French proposal. Because if it is implemented, it will be very hard to stop it becoming yet another technology that the copyright world demands should be bent to its own selfish purposes.

Source: Will Browsers Be Required By Law To Stop You From Visiting Infringing Sites? | Techdirt

Very scary indeed

Academic Book About Emojis Can’t Include The Emojis It Talks About Because Of Copyright

Jieun Kiaer, an Oxford professor of Korean linguistics, recently published an academic book called Emoji Speak: Communications and Behaviours on Social Media. As you can tell from the name, it’s a book about emoji, and about how people communicate with them:

Exploring why and how emojis are born, and the different ways in which people use them, this book highlights the diversity of emoji speak. Presenting the results of empirical investigations with participants of British, Belgian, Chinese, French, Japanese, Jordanian, Korean, Singaporean, and Spanish backgrounds, it raises important questions around the complexity of emoji use.

Though emojis have become ubiquitous, their interpretation can be challenging. What is humorous in one region, for example, might be considered inappropriate or insulting in another. Whilst emoji use can speed up our communication, we might also question whether they convey our emotions sufficiently. Moreover, far from belonging to the youth, people of all ages now use emoji speak, prompting Kiaer to consider the future of our communication in an increasingly digital world.

Sounds interesting enough, but as Goldman highlights with an image from the book, Kiaer was apparently unable to actually show examples of many of the emoji she was discussing due to copyright fears. While companies like Twitter and Google have offered up their own emoji sets under open licenses, not all of them have, and some of the specifics about the variations in how different companies represent different emoji apparently were key to the book.

So, for those, Kiaer actually hired an artist, Loli Kim, to draw similar emoji!

The page reads as follows (with paragraph breaks added for readability):

Notes on Images of Emojis

Social media spaces are almost entirely copyright free. They do not follow the same rules as the offline world. For example, on Twitter you can retweet any tweet and add your own opinion. On Instagram, you can share any post and add stickers or text. On TikTok, you can even ‘duet’ a video to add your own video next to a pre-existing one. As much as each platform has its own rules and regulations, people are able to use and change existing material as they wish. Thinking about copyright brings to light barriers that exist between the online and offline worlds. You can use any emoji in your texts, tweets, posts and videos, but if you want to use them in the offline world, you may encounter a plethora of copyright issues.

In writing this book, I have learnt that online and offline exist upon two very different foundations. I originally planned to have plenty of images of emojis, stickers, and other multi-modal resources featured throughout this book, but I have been unable to for copyright reasons. In this moment, I realized how difficult it is to move emojis from the online world into the offline world.

Even though I am writing this book about emojis and their significance in our lives, I cannot use images of them in even an academic book. Were I writing a tweet or Instagram post, however, I would likely have no problem. Throughout this book, I stress that emoji speak in online spaces is a grassroots movement in which there are no linguistic authorities and corporations have little power to influence which emojis we use. Comparatively, in offline spaces, big corporations take ownership of our emoji speak, much like linguistic authorities dictate how we should write and speak properly.

This sounds like something out of a science fiction story, but it is an important fact of which to be aware. While the boundaries between our online and offline words may be blurring, barriers do still exist between them. For this reason, I have had to use an artist’s interpretation of the images that I originally had in mind for this book. Links to the original images have been provided as endnotes, in case readers would like to see them.

Just… incredible. Now, my first reaction to this is that using the emoji and stickers and whatnot in the book seems like a very clear fair use situation. But… that requires a publisher willing to take up the fight (and an insurance company behind the publisher willing to finance that fight). And, that often doesn’t happen. Publishers are notoriously averse to supporting fair use, because they don’t want to get sued.

But, really, this just ends up highlighting (once again) the absolute ridiculousness of copyright in the modern world. No one in their right mind would think that a book about emoji is somehow harming the market for whatever emoji or stickers the professor wished to include. Yet, due to the nature of copyright, here we are. With an academic book about emoji that can’t even include the emoji being spoken about.

Source: Academic Book About Emojis Can’t Include The Emojis It Talks About Because Of Copyright | Techdirt

Reddit Wins, Doesn’t Have to NARC on Users Who Discussed Torrenting

This weekend, a federal court tossed a subpoena in a case against the internet service provider Grande that would require Reddit to reveal the identities of anonymous users that torrent movies.

The case was originally filed in 2021 by 20 movie producers against Grande Communications in the Western District of Texas federal court. The lawsuit claims that Grande is committing copyright infringement against the producers for allegedly ignoring the torrenting of 45 of their movies that occurred on its networks. As part of the case, the plaintiffs attempted to subpoena Reddit for IP addresses and user data for accounts that openly discussed torrenting on the platform. This weekend, Magistrate Judge Laurel Beeler denied the subpoena—meaning Reddit is off the hook.

“The plaintiffs thus move to compel Reddit to produce the identities of its users who are the subject of the plaintiffs’ subpoena,” Magistrate Judge Beeler wrote in her decision. “The issue is whether that discovery is permissible despite the users’ right to speak anonymously under the First Amendment. The court denies the motion because the plaintiffs have not demonstrated a compelling need for the discovery that outweighs the users’ First Amendment right to anonymous speech.”

Reddit was previously cleared of a similar subpoena in a similar lawsuit by the same judge back in May as reported by ArsTechnica. Reddit was asked to unmask eight users who were active in piracy threads on the platform, but the social media website pulled the same First Amendment defense.

Source: Reddit Wins, Doesn’t Have to NARC on Users Who Discussed Torrenting

Judge Seems (Correctly) Skeptical Of AI Copyright Lawsuit

Over the last few months there have been a flurry of lawsuits against AI companies, with most of them being focused on copyright claims. The site ChatGPTIsEatingTheWorld has been tracking all the lawsuits, which currently lists 11 lawsuits, seven of which are copyright claims. Five of those are from the same lawyers: Joseph Saveri and Matthew Butterick, who seem to want to corner the market on “suing AI companies for copyright.”

We already covered just how bad their two separate (though they’re currently trying to combine them, and no one can explain to me why it made sense to file them separately in the first place) lawsuits on behalf of authors are, as they show little understanding of how copyright actually works. But their original lawsuit against Stability AI, MidJourney, and DeviantArt was even worse, as we noted back in April. As we said at the time, they don’t allege a single act of infringement, but rather make vague statements about how what these AI tools are doing must be infringing.

(Also, the lawyers seemed to totally misunderstand what DeviantArt was doing: it was using open source tools to help DeviantArt artists prevent their works from being used as inspiration in AI systems, and the lawyers claimed that was infringing… but that’s a different issue.)

It appears that the judge overseeing that lawsuit has noticed just how weak the claims are. Though we don’t have a written opinion yet, Reuters reports that Judge William Orrick was pretty clear at last week’s hearing that the case, as currently argued, has no chance.

U.S. District Judge William Orrick said during a hearing in San Francisco on Wednesday that he was inclined to dismiss most of a lawsuit brought by a group of artists against generative artificial intelligence companies, though he would allow them to file a new complaint.

Orrick said that the artists should more clearly state and differentiate their claims against Stability AI, Midjourney and DeviantArt, and that they should be able to “provide more facts” about the alleged copyright infringement because they have access to Stability’s relevant source code.

“Otherwise, it seems implausible that their works are involved,” Orrick said, noting that the systems have been trained on “five billion compressed images.”

Again, the theory of the lawsuit seemed to be that AI companies cut up little pieces of the content they train on and create a “collage” in response. Except, that’s not at all how it works. And since the complaint can’t show any specific work that has been infringed on by the output, the case seems like a loser. And it’s good the judge sees that.

He also recognizes that merely being inspired by someone else’s art doesn’t make the new art infringing:

“I don’t think the claim regarding output images is plausible at the moment, because there’s no substantial similarity” between images created by the artists and the AI systems, Orrick said.

It seems likely that Saveri and crew will file an amended complaint to try to more competently make this argument, but since the underlying technology doesn’t fundamentally do what the lawsuit pretends it does, it’s difficult to see how it can succeed.

But, of course, this is copyright, and copyright caselaw doesn’t always follow logic or what the law itself says. So it’s no surprise that Saveri and Butterick are trying multiple lawsuits with these theories. They might just find a judge confused enough to buy it.

Source: Judge Seems (Correctly) Skeptical Of AI Copyright Lawsuit | Techdirt

Italians turn telco regulator into internet spy, judge and jury – and treat 25% of the population as criminals

Italy’s brand new anti-piracy law has just received full approval from telecoms regulator AGCOM. In a statement issued Thursday, AGCOM noted its position “at the forefront of the European scene in combating online piracy.” The new law comes into force on August 8 and authorizes nationwide ISP blocking of pirated live event streams, and enables the state to issue fines of up to 5,000 euros to users of pirate streams.

Unanimously approved by the Chamber of Deputies back in March and then unanimously approved by the Senate earlier this month, Italy’s new anti-piracy law has just been unanimously approved by telecoms regulator AGCOM.

In a statement published Thursday, AGCOM welcomed the amendments to Online Copyright Enforcement regulation 680/13/CONS, which concern measures to counter the illegal distribution of live sports streams, as laid out in Resolution 189/23/CONS.

The new provisions grant AGCOM the power to issue “dynamic injunctions” against online service providers of all kinds, a privilege usually reserved for judges in Europe’s highest courts. The aim is to streamline blocking measures against unlicensed IPTV services, with the goal of rendering them inaccessible across all of Italy.

“With such measures, it will be possible to disable access to pirated content in the first 30 minutes of the event broadcast by blocking DNS resolution of domain names and blocking the routing of network traffic to IP addresses uniquely intended for illicit activities,” AGCOM says.

[…]

Penalties For Challenging AGCOM’s New Powers

When AGCOM issues blocking instructions to service providers, their details will be passed to the Public Prosecutor’s Office at the Court of Rome.

After carrying out AGCOM’s instructions, those providers will be required to send a report “without delay” to the Public Prosecutor’s Office. It must detail “all activities carried out in fulfillment of the aforementioned measures” along with “any existing data or information in their possession that may allow for the identification of the providers of the content disseminated abusively.”

In other words, ISPs will be expected to block pirates and gather intelligence on the way. Failure to comply with the instructions of AGCOM will result in a sanction as laid out in LEGGE 31 luglio 1997, n. 249 (Law 249 of July 31, 1997); an administrative fine of 20 million lira to 500 million lira, or in today’s currency – €10,620 to €265,000.

Those involved in the supply/distribution of infringing streams will now face up to three years in prison and a fine of up to €15,000. That’s just €5,000 higher than the minimum punishment intermediaries risk should they fail to follow blocking instructions. Notably, it’s still €250,000 less than the maximum fine a service provider could face if they fail to block piracy carried out by actual pirates.

Watch Pirate Streams? There’s a Fine For That

Unlike in the United States, where simply consuming pirated streams probably isn’t illegal, in the EU the Court of Justice of the European Union confirmed in 2017 that consuming illicit streams runs contrary to law.

With new deterrents in place against operators of pirate services and otherwise innocent online service providers, Italy has a new deterrent for people who consume pirated streams. From August 8, 2023, they risk a fine of up to €5,000. At least on paper, that has the potential to become quite interesting.

IPSOS research carried out in Italy over the past few years found that roughly 25% of the adult population consume pirate IPTV streams to some extent during a year.

Italy has a population of around 59 million so even with some aggressive rounding that’s still a few million potential pirates. How evidence of this offense can be obtained and then attributed to an individual is unclear.

[…]

Source: Italian Pirate IPTV Customers Risk a 5,000 Euro Fine Starting August 8, 2023 * TorrentFreak

So they will be able to inspect your internet usage, block VPNs, kick you off the internet, and fine and imprison you – all with no recourse!

That Which Copyright Destroys, ‘Pirates’ Can Save

There’s an interesting post on TorrentFreak that concerns so-called “pirate” subtitles for films. It’s absurd that anyone could consider subtitles to be piracy in any way. They are a good example of how ordinary people can add value by generously helping others enjoy films and TV programs in languages they don’t understand. In no sense do “pirate” subtitles “steal” from those films and programs, they manifestly enhance them. And yet the ownership-obsessed copyright world actively pursues people who dare to spread joy in this way. In discussing these subtitles, TorrentFreak mentions a site that I’ve not heard of before, Karagarga:

an illustrious BitTorrent tracker that’s been around for more than 18 years. Becoming a member of the private community isn’t easy but those inside gain access to a wealth of film obscurities.

The site focuses on archiving rare classic and cult movies, as well as other film-related content. Blockbusters and other popular Hollywood releases can’t be found on the site as uploading them is strictly forbidden.

TorrentFreak links to an article about Karagarga published some years ago by the Canadian newspaper National Post. Here’s a key point it makes:

It’s difficult to overstate the significance of such a resource. Movies of unflagging historical merit are otherwise lost to changes in technology and time every year: film prints are damaged or lost, musty VHS tapes aren’t upgraded, DVDs fall out of print without reissue, back catalogues never make the transition to digital. But should even a single copy of the film exist, however tenuously, it can survive on Karagarga: one person uploads a rarity and dozens more continue to share.

Although that mentions things like film prints being lost, or back catalogues that aren’t converted to digital formats, the underlying cause of films being lost is copyright. It is copyright that prevents people from making backups of films, whether analogue or digital. Even though people are painfully aware of the vulnerability of films that exist in a few copies or even just one copy, it is generally illegal for them to do anything about it, because of copyright. Instead, they must often sit by as cinematic masterpieces are lost forever.

Unless, of course, sites like Karagarga make unauthorized digital copies. It’s a great demonstration of the fact that copyright, far from preserving culture, often leads to its permanent loss. And that supposedly “evil” sites like Karagarga are the ones that save it for posterity.

Source: That Which Copyright Destroys, ‘Pirates’ Can Save | Techdirt

Paris 2024 Olympics: Concern over French plan for AI surveillance

Under a recent law, police will be able to use CCTV algorithms to pick up anomalies such as crowd rushes, fights or unattended bags.

The law explicitly rules out using facial recognition technology, as adopted by China, for example, in order to trace “suspicious” individuals.

But opponents say it is the thin end of the wedge. Even though the experimental period allowed by the law ends in March 2025, they fear the French government’s real aim is to make the new security provisions permanent.

“We’ve seen this before at previous Olympic Games like in Japan, Brazil and Greece. What were supposed to be special security arrangements for the special circumstances of the games, ended up being normalised,” says Noémie Levain, of the digital rights campaign group La Quadrature du Net (Squaring the Web).

[…]

“We will not – and cannot by law – provide facial recognition, so this is a wholly different operation from what you see in China,” he says.

“What makes us attractive is that we provide security, but within the framework of the law and ethics.”

But according to digital rights activist Noémie Levain, this is only a “narrative” that developers are using to sell their product – knowing full well that the government will almost certainly favour French companies over foreign firms when it comes to awarding the Olympics contracts.

“They say it makes all the difference that here there will be no facial recognition. We say it is essentially the same,” she says.

“AI video monitoring is a surveillance tool which allows the state to analyse our bodies, our behaviour, and decide whether it is normal or suspicious. Even without facial recognition, it enables mass control.

“We see it as just as scary as what is happening in China. It’s the same principle of losing the right to be anonymous, the right to act how we want to act in public, the right not to be watched.”

Source: Paris 2024 Olympics: Concern over French plan for AI surveillance – BBC News

‘Taco Tuesday’ is no longer a trademarked phrase – wait, you can trademark a phrase like that?!

Taco Bell succeeded in its petition to remove the “Taco Tuesday” trademark held by Taco John’s, claiming it held an unfair monopoly over the phrase. Taco John’s CEO Jim Creel backed down from the fight on Tuesday, saying it isn’t worth the legal fees to retain the regional chain’s trademark.

“We’ve always prided ourselves on being the home of Taco Tuesday, but paying millions of dollars to lawyers to defend our mark just doesn’t feel like the right thing to do,” Taco John’s CEO Jim Creel said in a statement to CNN.

Taco John’s adopted the “Taco Tuesday” slogan back in the early 1980s as a two-for-one deal, labeling the promotion as “Taco Twosday” in an effort to ramp up sales. The company trademarked the term in 1989 and owned the right to the phrase in all states with the exception of New Jersey where Gregory’s Restaurant & Tavern beat out Taco John’s by trademarking the term in 1982.

Three decades later, Taco John’s finally received pushback when Taco Bell filed a petition with the U.S. Patent and Trademark Office in May to cancel the trademark, saying any restaurant should be able to use “Taco Tuesday.”

[…]

Source: ‘Taco Tuesday’ Has Been Liberated From Its Corporate Overlords

If you think about it, the ability to trademark two common words following each other doesn’t make sense at all really. For any two-word combination, there must have been prior common use.

A Bunch Of Authors Sue OpenAI Claiming Copyright Infringement, Because They Don’t Understand Copyright

You may have seen some headlines recently about some authors filing lawsuits against OpenAI. The lawsuits (plural, though I’m confused why it’s separate attempts at filing a class action lawsuit, rather than a single one) began last week, when authors Paul Tremblay and Mona Awad sued OpenAI and various subsidiaries, claiming copyright infringement in how OpenAI trained its models. They got a lot more attention over the weekend when another class action lawsuit was filed against OpenAI with comedian Sarah Silverman as the lead plaintiff, along with Christopher Golden and Richard Kadrey. The same day the same three plaintiffs (though with Kadrey now listed as the top plaintiff) also sued Meta, though the complaint is basically the same.

All three cases were filed by Joseph Saveri, a plaintiffs class action lawyer who specializes in antitrust litigation. As with all too many class action lawyers, the goal is generally enriching the class action lawyers, rather than actually stopping any actual wrong. Saveri is not a copyright expert, and the lawsuits… show that. There are a ton of assumptions about how Saveri seems to think copyright law works, which is entirely inconsistent with how it actually works.

The complaints are basically all the same, and what it comes down to is the argument that AI systems were trained on copyright-covered material (duh) and that somehow violates their copyrights.

Much of the material in OpenAI’s training datasets, however, comes from copyrighted works—including books written by Plaintiffs—that were copied by OpenAI without consent, without credit, and without compensation

But… this is both wrong and not quite how copyright law works. Training an LLM does not require “copying” the work in question, but rather reading it. To some extent, this lawsuit is basically arguing that merely reading a copyright-covered work is, itself, copyright infringement.

Under this definition, all search engines would be copyright infringing, because effectively they’re doing the same thing: scanning web pages and learning from what they find to build an index. But we’ve already had courts say that’s not even remotely true. If the courts have decided that search engines scanning content on the web to build an index is clearly transformative fair use, so too would be scanning internet content for training an LLM. Arguably the latter case is way more transformative.

And this is the way it should be, because otherwise, it would basically be saying that anyone reading a work by someone else, and then being inspired to create something new would be infringing on the works they were inspired by. I recognize that the Blurred Lines case sorta went in the opposite direction when it came to music, but more recent decisions have really chipped away at Blurred Lines, and even the recording industry (the recording industry!) is arguing that the Blurred Lines case extended copyright too far.

But, if you look at the details of these lawsuits, they’re not arguing any actual copying (which, you know, is kind of important for there to be copyright infringement), but just that the LLMs have learned from the works of the authors who are suing. The evidence there is, well… extraordinarily weak.

For example, in the Tremblay case, they asked ChatGPT to “summarize” his book “The Cabin at the End of the World,” and ChatGPT does so. They do the same in the Silverman case, with her book “The Bedwetter.” If those are infringing, so is every book report by every schoolchild ever. That’s just not how copyright law works.

The lawsuit tries one other tactic here to argue infringement, beyond just “the LLMs read our books.” It also claims that the corpus of data used to train the LLMs was itself infringing.

For instance, in its June 2018 paper introducing GPT-1 (called “Improving Language Understanding by Generative Pre-Training”), OpenAI revealed that it trained GPT-1 on BookCorpus, a collection of “over 7,000 unique unpublished books from a variety of genres including Adventure, Fantasy, and Romance.” OpenAI confirmed why a dataset of books was so valuable: “Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information.” Hundreds of large language models have been trained on BookCorpus, including those made by OpenAI, Google, Amazon, and others.

BookCorpus, however, is a controversial dataset. It was assembled in 2015 by a team of AI researchers for the purpose of training language models. They copied the books from a website called Smashwords that hosts self-published novels, that are available to readers at no cost. Those novels, however, are largely under copyright. They were copied into the BookCorpus dataset without consent, credit, or compensation to the authors.

If that’s the case, then they could make the argument that BookCorpus itself is infringing on copyright (though, again, I’d argue there’s a very strong fair use claim under the Perfect 10 cases), but that’s separate from the question of whether or not training on that data is infringing.

And that’s also true of the other claims of secret pirated copies of books that the complaint insists OpenAI must have relied on:

As noted in Paragraph 32, supra, the OpenAI Books2 dataset can be estimated to contain about 294,000 titles. The only “internet-based books corpora” that have ever offered that much material are notorious “shadow library” websites like Library Genesis (aka LibGen), Z-Library (aka B-ok), Sci-Hub, and Bibliotik. The books aggregated by these websites have also been available in bulk via torrent systems. These flagrantly illegal shadow libraries have long been of interest to the AI-training community: for instance, an AI training dataset published in December 2020 by EleutherAI called “Books3” includes a recreation of the Bibliotik collection and contains nearly 200,000 books. On information and belief, the OpenAI Books2 dataset includes books copied from these “shadow libraries,” because those are the sources of trainable books most similar in nature and size to OpenAI’s description of Books2.

Again, think of the implications if this is copyright infringement. If a musician were inspired to create music in a certain genre after hearing pirated songs in that genre, would that make the songs they created infringing? No one thinks that makes sense except the most extreme copyright maximalists. But that’s not how the law actually works.

This entire line of cases is just based on a total and complete misunderstanding of copyright law. I completely understand that many creative folks are worried and scared about AI, and in particular that it was trained on their works, and can often (if imperfectly) create works inspired by them. But… that’s also how human creativity works.

Humans read, listen, watch, learn from, and are inspired by those who came before them. And then they synthesize that with other things, and create new works, often seeking to emulate the styles of those they learned from. AI systems and LLMs are doing the same thing. It’s not infringing to learn from and be inspired by the works of others. It’s not infringing to write a book report style summary of the works of others.

I understand the emotional appeal of these kinds of lawsuits, but the legal reality is that these cases seem doomed to fail, and possibly in a way that will leave the plaintiffs having to pay legal fees (since in copyright cases legal fee awards are much more common).

That said, if we’ve learned anything at all in the past two plus decades of lawsuits about copyright and the internet, courts will sometimes bend over backwards to rewrite copyright law to pretend it says what they want it to say, rather than what it does say. If that happens here, however, it would be a huge loss to human creativity.

Source: A Bunch Of Authors Sue OpenAI Claiming Copyright Infringement, Because They Don’t Understand Copyright | Techdirt

Hollywood studios proposed AI contract that would give them likeness rights ‘for the rest of eternity’

During today’s press conference in which Hollywood actors confirmed that they were going on strike, Duncan Crabtree-Ireland, SAG-AFTRA’s chief negotiator, revealed a proposal from Hollywood studios that sounds ripped right out of a Black Mirror episode.

In a statement about the strike, the Alliance of Motion Picture and Television Producers (AMPTP) said that its proposal included “a groundbreaking AI proposal that protects actors’ digital likenesses for SAG-AFTRA members.”

“If you think that’s a groundbreaking proposal, I suggest you think again.”

When asked about the proposal during the press conference, Crabtree-Ireland said that “This ‘groundbreaking’ AI proposal that they gave us yesterday, they proposed that our background performers should be able to be scanned, get one day’s pay, and their companies should own that scan, their image, their likeness and should be able to use it for the rest of eternity on any project they want, with no consent and no compensation. So if you think that’s a groundbreaking proposal, I suggest you think again.”

In response, AMPTP spokesperson Scott Rowe sent out a statement denying the claims made during SAG-AFTRA’s press conference. “The claim made today by SAG-AFTRA leadership that the digital replicas of background actors may be used in perpetuity with no consent or compensation is false. In fact, the current AMPTP proposal only permits a company to use the digital replica of a background actor in the motion picture for which the background actor is employed. Any other use requires the background actor’s consent and bargaining for the use, subject to a minimum payment.”

The use of generative AI has been one of the major sticking points in negotiations between the two sides (it’s also a major issue behind the writers strike), and in her opening statement of the press conference, SAG-AFTRA president Fran Drescher said that “If we don’t stand tall right now, we are all going to be in trouble, we are all going to be in jeopardy of being replaced by machines.”

Source: Hollywood studios proposed AI contract that would give them likeness rights ‘for the rest of eternity’ – The Verge

New privacy deal allows US tech giants to continue storing European user data on American servers

Nearly three years after a 2020 court decision threatened to grind transatlantic e-commerce to a halt, the European Union has adopted a plan that will allow US tech giants to continue storing data about European users on American soil. In a decision announced Monday, the European Commission approved the Trans-Atlantic Data Privacy Framework. Under the terms of the deal, the US will establish a court Europeans can engage with if they feel a US tech platform violated their data privacy rights. President Joe Biden announced the creation of the Data Protection Review Court in an executive order he signed last fall. The court can order the deletion of user data and impose other remedial measures. The framework also limits access to European user data by US intelligence agencies.

The Trans-Atlantic Data Privacy Framework is the latest chapter in a saga that is now more than a decade in the making. It was only earlier this year that the EU fined Meta a record-breaking €1.2 billion after it found that Facebook’s practice of moving EU user data to US servers violated the bloc’s digital privacy laws. The EU also ordered Meta to delete the data it already had stored on its US servers if the company didn’t have a legal way to keep that information there by the fall. As The Wall Street Journal notes, Monday’s agreement should allow Meta to avoid the need to delete any data, but the company may still end up paying the fine.

Even with a new agreement in place, it probably won’t be smooth sailing just yet for the companies that depend the most on cross-border data flows. Max Schrems, the lawyer who successfully challenged the previous Safe Harbor and Privacy Shield agreements that governed transatlantic data transfers before today, told The Journal he plans to challenge the new framework. “We would need changes in US surveillance law to make this work and we simply don’t have it,” he said. For what it’s worth, the European Commission says it’s confident it can defend its new framework in court.

Source: New privacy deal allows US tech giants to continue storing European user data on American servers | Engadget

Another problem is that the US side is not enshrined in law but in an executive order, which can be revoked at any time.

Google Says It’ll Scrape Everything You Post Online for AI

Google updated its privacy policy over the weekend, explicitly saying the company reserves the right to scrape just about everything you post online to build its AI tools. If Google can read your words, assume they belong to the company now, and expect that they’re nesting somewhere in the bowels of a chatbot.

“Google uses information to improve our services and to develop new products, features and technologies that benefit our users and the public,” the new Google policy says. “For example, we use publicly available information to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.”

Fortunately for history fans, Google maintains a history of changes to its terms of service. The new language amends an existing policy, spelling out new ways your online musings might be used for the tech giant’s AI tools.

[…]

This is an unusual clause for a privacy policy. Typically, these policies describe ways that a business uses the information that you post on the company’s own services. Here, it seems Google reserves the right to harvest and harness data posted on any part of the public web, as if the whole internet is the company’s own AI playground. Google did not immediately respond to a request for comment.

[…]

Source: Google Says It’ll Scrape Everything You Post Online for AI

Unfortunately, the rest of the article descends into Gizmodo’s luddite War Against AI ™ language, missing the point that this is nothing much new – Google has been able to use any information you type into any of its products for pretty much any purpose (eg advertising, email scanning) for decades (which is why I don’t use Chrome). It is, however, something that most people simply don’t realise.

Film companies demand names of Reddit users who discussed piracy in 2011

Reddit is fighting another attempt by film companies to unmask anonymous Reddit users who discussed piracy.

The same companies lost a previous, similar motion to identify Reddit users who wrote comments in piracy-related threads. Reddit avoided revealing the identities of eight users by arguing that the First Amendment protected their right to anonymous speech.

Reddit is seeking a similar outcome in the new case, in which the film companies’ subpoena to Reddit sought “Basic account information including IP address registration and logs from 1/1/2016 to present, name, email address and other account registration information” for six users who wrote comments on Reddit threads in 2011 and 2018.

[…]

Film companies, including Bodyguard Productions and Millennium, are behind both lawsuits. In the first case, they sued Internet provider RCN for allegedly ignoring piracy on its broadband network. They sued Grande in the second case. Both RCN and Grande are owned by Astound Broadband.

Reddit is a non-party in both copyright infringement cases filed against the Astound-owned ISPs, but was served with subpoenas demanding information on Reddit users. When Reddit refused to provide all the requested information in both cases, the film companies filed motions to compel Reddit to respond to the subpoenas in US District Court for the Northern District of California.

[…]

Reddit’s response to the latest motion to compel, which was reported earlier today by TorrentFreak, said the film companies “have already obtained from Grande identifying information for 118 of Grande’s ‘top 125 pirating IP addresses.’ That concession dooms the Motion; Plaintiffs cannot possibly establish that unmasking these six Reddit users is the only way for Plaintiffs to generate evidence necessary for their claims when they have already succeeded in pursuing an alternative and better way.”

The evidence obtained directly from Grande is “far better than what they could obtain from Reddit,” Reddit said, adding that plaintiffs can subpoena the 118 subscribers that are known to have engaged in copyright infringement instead.

Reddit said the six users whose identities are being sought “posted generally about using Grande to torrent. These six Reddit users responded to two threads in a subreddit for the city of Austin, Texas. The majority of the users posted over 12 years ago while the remaining two posted five years ago.”
[…]

Source: Film companies demand names of Reddit users who discussed piracy in 2011 | Ars Technica

Sacramento Sheriff is sharing license plate reader data with anti-abortion states, records show

In 2015, Democratic Elk Grove Assemblyman Jim Cooper voted for Senate Bill 34, which restricted law enforcement from sharing automated license plate reader (ALPR) data with out-of-state authorities. In 2023, now-Sacramento County Sheriff Cooper appears to be doing just that.

The Electronic Frontier Foundation (EFF), a digital rights group, has sent Cooper a letter requesting that the Sacramento County Sheriff’s Office cease sharing ALPR data with out-of-state agencies that could use it to prosecute someone for seeking an abortion.

According to documents that the Sheriff’s Office provided EFF through a public records request, it has shared license plate reader data with law enforcement agencies in states that have passed laws banning abortion, including Alabama, Oklahoma and Texas.

[…]

Schwartz said that a sheriff in Texas, Idaho or any other state with an abortion ban on the books could use that data to track people’s movements around California, knowing where they live, where they work and where they seek reproductive medical care, including abortions.

The Sacramento County Sheriff’s Office isn’t the only one sharing that data; in May, EFF released a report showing that 71 law enforcement agencies in 22 California counties — including Sacramento County — were sharing such data. The practice is in violation of a 2015 law that states “a (California law enforcement) agency shall not sell, share, or transfer ALPR information, except to another (California law enforcement) agency, and only as otherwise permitted by law.”

[…]

 

Source: Sacramento Sheriff is sharing license plate reader data with anti-abortion states, records show

Comedian, novelists sue OpenAI for reading books. Maybe we should sue people for reading them as well?

Award-winning novelists Paul Tremblay and Mona Awad, and, separately, comedian Sarah Silverman and novelists Christopher Golden and Richard Kadrey, have sued OpenAI and accused the startup of training ChatGPT on their books without consent, violating copyright laws.

The lawsuits, both filed in the Northern District Court of San Francisco, say ChatGPT generates accurate summaries of their books, and highlight this as evidence that the software was trained on their work.

[…]

In the second suit, Silverman et al [PDF], make similar claims.

[…]

OpenAI trains its large language models by scraping text from the internet, and although it hasn’t revealed exactly what resources it has swallowed up, the startup has admitted to training its systems on hundreds of thousands of books protected by copyright, and stored on websites like Sci-Hub or Bibliotik.

[…]

Source: Comedian, novelists sue OpenAI for scraping books • The Register

The problem, though, is that people read books too, and they can (and do) create accurate summaries of them. What is worse, the creativity shown by people can be shown to be influenced by the books, art, dance, etc that they have ingested. So maybe people should be banned from reading books as well under copyright?

Amazon claims it isn’t a “Very Large Online Platform” to evade EU rules

Amazon doesn’t want to comply with Europe’s Digital Services Act, and to avoid the rules the company is arguing that it doesn’t meet the definition of a Very Large Online Platform under EU law. Amazon filed an appeal at the EU General Court to challenge the European Commission decision that Amazon meets the criteria and must comply with the new regulations.

“We agree with the EC’s objective and are committed to protecting customers from illegal products and content, but Amazon doesn’t fit this description of a ‘Very Large Online Platform’ (VLOP) under the DSA and therefore should not be designated as such,” Amazon said in a statement provided to Ars today.

[…]

Amazon argued that the new law is supposed to “address systemic risks posed by very large companies with advertising as their primary revenue and that distribute speech and information,” and not businesses that are primarily retail-based. “The vast majority of our revenue comes from our retail business,” Amazon said.

Amazon claims to be “unfairly singled out”

Amazon also claims it’s unfair that some retailers with larger businesses in individual countries weren’t on the list of 19 companies that must comply with the Digital Services Act. The rules only designate platforms with over 45 million active users in the EU as of February 17.

Amazon said it is “not the largest retailer in any of the EU countries where we operate, and none of these largest retailers in each European country has been designated as a VLOP. If the VLOP designation were to be applied to Amazon and not to other large retailers across the EU, Amazon would be unfairly singled out and forced to meet onerous administrative obligations that don’t benefit EU consumers.”

Those other companies Amazon referred to include Poland’s Allegro or the Dutch Bol.com, according to a Bloomberg report. Neither of those platforms appears to have at least 45 million active users.

[…]

In April, Europe announced its designation of 19 large online platforms, which are mostly US-based companies. Five are run by Google, specifically YouTube, Google Search, the Google Play app and digital media store, Google Maps, and Google Shopping. Meta-owned Facebook and Instagram are on the list, as are Amazon’s online store, Apple’s App Store, Microsoft’s Bing search engine, TikTok, Twitter, and Wikipedia.

Listed platforms also include Alibaba AliExpress, Booking.com, LinkedIn, Pinterest, and Snapchat. The other platform is German online retailer Zalando, which was the first company to sue the EC in an attempt to get removed from the list.

Companies have until August 25 to comply and could face fines of up to 6 percent of their annual revenue if they don’t. Companies will have to submit annual risk assessments and risk mitigation plans that are subject to independent audits and oversight by the European Commission.

“Platforms will have to identify, analyze and mitigate a wide array of systemic risks ranging from how illegal content and disinformation can be amplified on their services, to the impact on the freedom of expression and media freedom,” the EC said in April. “Similarly, specific risks around gender-based violence online and the protection of minors online and their mental health must be assessed and mitigated.” One new rule bans advertisements that target users based on sensitive data such as ethnic origin, political opinions, or sexual orientation.

The EC also said that users must be given “clear information on why they are recommended certain information and will have the right to opt-out from recommendation systems based on profiling.” Users must have the ability “to report illegal content easily and platforms have to process such reports diligently.” Amazon and the other platforms must also “provide an easily understandable, plain-language summary of their terms and conditions, in the languages of the Member States where they operate.”

[…]

 

Source: Amazon claims it isn’t a “Very Large Online Platform” to evade EU rules | Ars Technica

Poor, poor Amazon – the spy-company monopolist marketplace that rips off the retailers in its own market!

An Alarming 87 Percent Of Retro Games Are Being Lost To Time

[…] The Video Game History Foundation (VGHF) partnered with the Software Preservation Network, an organization intent on advancing software preservation through collective action, to release a report on the disappearance of classic video games. “Classic” in this case has been defined as all games released before 2010, which the VGHF noted is the “year when digital game distribution started to take off.”

The status of physical video games

In the study, the two groups found that 87 percent of these classic games are not in release and are considered critically endangered due to their widespread unavailability.

[…]

“For accessing nearly 9 in 10 classic games, there are few options: Seek out and maintain vintage collectible games and hardware, travel across the country to visit a library, or… piracy,” VGHF co-director Kelsey Lewin wrote.

[…]

the study claims that just 13 percent of game history is archived in libraries right now. And that’s part of the dilemma here. According to a March 2023 Ars Technica report, laws around the Digital Millennium Copyright Act (DMCA) largely prevent folks from making and distributing copies of any DRM-protected digital work. While the U.S. Copyright Office has issued exemptions to those rules so that libraries and researchers can archive digital material, video games are explicitly left out, which makes it nigh impossible for anyone to effectively study game history.

“Imagine if the only way to watch Titanic was to find a used VHS tape, and maintain your own vintage equipment so that you could still watch it,” Lewin wrote. “And what if no library, not even the Library of Congress, could do any better—they could keep and digitize that VHS of Titanic, but you’d have to go all the way there to watch it.”

[…]

Though not surprised, she was still alarmed by the “flimsy” ways in which games disappear, pointing to Antstream Arcade, which houses a plethora of games from the Commodore 64 to the Game Boy that could be lost to time should it close up shop. The Nintendo eShop is a more mainstream example.

“When the eShop shut down the availability of the Game Boy library, [the number of available Game Boy games] went from something like 11 percent to 4.5 percent,” Lewin said. “The company wiped out half of the availability of the library of Game Boy games just by shutting down the Nintendo eShop.”

[…]

Lewin noted that although libraries are allowed to do a lot of things “by being libraries [and] preservation institutions,” the Entertainment Software Association (ESA) has consistently lobbied against game preservation efforts such as copyright permissions and allowing the rental of digital video games.

“The ESA has basically opposed all of these new proposed exemptions,” Lewin said. “They’ve just been like, ‘No, that will hurt our bottom line,’ or, ‘That will hurt the industry’s bottom line.’ The ESA also says the industry is doing plenty to keep classic games in release, pointing to this thriving reissue market. And that’s true; there is a thriving reissue market. It’s just that it only covers 13 percent of video games, and that’s not likely to get any better any time soon.”

Read More: As More Games Disappear Forever, John Carmack Has Some Great Advice About Preservation

The study will be used in a 2024 copyright hearing to ask for exemptions for games. Lewin said she’s hopeful that progress will be made, suggesting that, should the hearing go well, games could be available on digital library apps like Libby. You can read the full 50-page study on the open repository Zenodo.

Source: An Alarming 87 Percent Of Retro Games Are Being Lost To Time

France Allows Police to Remotely Turn On GPS, Camera, Audio on Phones

Amidst ongoing protests in France, the country has just passed a new bill that will allow police to remotely access suspects’ cameras, microphones, and GPS on cell phones and other devices.

As reported by Le Monde, the bill has been criticized by the French public as a “snoopers’ charter” that allows police unfettered access to the location of the country’s citizens. Moreover, police can activate cameras and microphones to take video and audio recordings of suspects. The bill will reportedly only apply to suspects in crimes that are punishable by a minimum of five years in jail.

[…]

French politicians added an amendment that requires a judge’s approval for any surveillance conducted under the scope of the bill and limits the duration of surveillance to six months.

[…]

In 2021, The New York Times reported that the French Parliament passed a bill that would expand the French police force’s ability to monitor civilians using drones. French President Emmanuel Macron argued at the time that the bill was meant to protect police officers from increasingly violent protestors.

[…]

Source: France Passes Bill Allowing Police to Remotely Access Phones

Amazon’s iRobot Roomba acquisition under formal EU investigation

European Union regulators have opened an official investigation into Amazon’s proposed $1.7 billion acquisition of iRobot, the company behind the popular Roomba lineup of robot vacuum cleaners.

In a press release, the European Commission said it’s concerned that “the transaction would allow Amazon to restrict competition in the market for robot vacuum cleaners (‘RVCs’) and to strengthen its position as online marketplace provider.” The European Commission is also looking at how getting access to iRobot users’ data may give Amazon an advantage “in the market for online marketplace services to third-party sellers (and related advertising services) and / or other data-related markets.”

[…]

Source: Amazon’s iRobot Roomba acquisition under formal EU investigation

Do you really want Amazon to know the layout of the interior of your home?

People Are Using Forged Court Orders and DMCA Notices To Disappear Content They Don’t Like

Copyright is still high on the list of censorial weapons. When you live in (or target) a country that protects free speech rights and offers intermediaries immunity via Section 230, you quickly surmise there’s a soft target lying between the First Amendment and the CDA.

That soft target is the DMCA. Thanks to plenty of lived-in experience, services serving millions or billions of users have decided it’s far easier to cater to (supposed) copyright holders than protect their other millions (or billions!) of users from abusive DMCA takedown demands.

There’s no immunity when it comes to the DMCA. There’s only the hope that US courts (should they be actually involved) will view good faith efforts to remove infringing content as acceptable preventative efforts.

But terrible people who neither respect the First Amendment nor the Communications Decency Act have found exploitable loopholes to disappear content they don’t like. And it’s always the worst people doing this. An entire cottage industry of “reputation management” firms has calcified into a so-called business model that views anything as acceptable until a court starts handing down sanctions.

“Cursory review” is the name of the game. Bullshit is fed to DMCA inboxes in hopes the people overseeing millions (or billions!) of pieces of uploaded content won’t spend too much time vetting takedown requests. When the initial takedown requests fail, bullshit artists (some of them hired!) decide to exploit the public sector.

Bogus litigation involving nonexistent defendants gives bad actors the legal paperwork they need to silence their critics. Bullshit default judgments are handed to bad faith plaintiffs by judges who can’t be bothered to do anything other than scan the docket to ensure at least some filings exist.

At the bottom of this miserable rung are the people who can’t even exploit these massively exploitable holes effectively. The bottom dwellers do what’s absolutely illegal, rather than just legally questionable. They forge court orders to demand takedowns of content they don’t like.

Eugene Volokh of the eponymous Volokh Conspiracy has plenty of experience with every variety of abusive takedown action listed above. In fact, he’s published an entire paper about these multiple levels of bullshit in the Utah Law Review.

Ironically, it’s that very paper that’s triggered the latest round of bogus takedown demands.

Yesterday, I saw that someone tried to use a different scheme, which I briefly mentioned in the article (pp. 300-01), to try to deindex the Utah Law Review version of my article: They sent a Digital Millennium Copyright Act notice to Google claiming that they owned the copyright in my article, and that the Utah Law Review version was an unauthorized copy of the version that I had posted on my own site:

Welcome to the party, “I Liam.”

But who do you represent? Volokh has some idea(s).

The submitter, therefore, asked Google to “deindex” that page—remove it from Google’s indexes, so that people searching for “mergeworthrx” or “stephen cichy” or “anthony minnuto” (another name mentioned on the page) wouldn’t see it.

So what prompted Google to remove this content that “I Liam” wished to disappear on behalf of his benefactors (presumably “mergeworthrx,” “stephen cichy,” and “anthony minnuto”)?

Well, it was a court order — one that was faked by whoever “I Liam” is:

Except there was no court order. Case No. 13-13548 CA was a completely different case. Celia Ampel, a reporter for the South Florida Daily Business Review, was never sued by MergeworthRX. The file submitted to Google was a forgery.

And definitely not an anomaly:

It was one of over 90 documents submitted to Google (and to other hosting platforms) that I believe to be forgeries. 

[…]

Source: Terrible People Are Still Using Forged Court Orders To Disappear Content They Don’t Like | Techdirt

The writer goes on to say that it’s terrible that there are terrible people and that you can’t blame Google, when there’s a solid case to be made that Google could indeed do more due diligence. When the DMCA came into effect, people warned it was ripe for abuse, and so it proved. Alternatives were suggested but discarded. The DMCA itself is very poor law and should be repealed: it protects something we shouldn’t be protecting in the first place, and does so in a way that lets people take down content almost at will, with little recourse for those affected.

$6.3b US firm TeleSign breached GDPR, reputation-scoring half the world’s mobile phone users

A US-based fraud prevention company is in hot water over allegations it not only collected data from millions of EU citizens and processed it using automated tools without their knowledge, but that it did so in the United States, all in violation of the EU’s data protection rules.

The complaint was filed by Austrian privacy advocacy group noyb, helmed by lawyer Max Schrems, and it doesn’t pull any punches in its claims that TeleSign, through its former Belgian parent company BICS, secretly collected data on cellphone users around the world.

That data, noyb alleges, was fed into an automated system that generates “reputation scores” that TeleSign sells to its customers, which include TikTok, Salesforce, Microsoft, and AWS, among others, for verifying the identity of the person behind a phone number and preventing fraud.

BICS, which acquired TeleSign in 2017, describes itself as “a global provider of international wholesale connectivity and interoperability services,” in essence operating as an interchange for various national cellular networks. Per noyb, BICS operates in more than 200 countries around the world and “gets detailed information (e.g. the regularity of completed calls, call duration, long-term inactivity, range activity, or successful incoming traffic) [on] about half of the worldwide mobile phone users.”

That data is regularly shared with TeleSign, noyb alleges, without any notification to the customers whose data is being collected and used.

[…]

In its complaint, an auto-translated English version of which was reviewed by The Register, noyb alleges that TeleSign is in violation of the GDPR’s provisions that ban the use of automated profiling tools, as well as rules that require affirmative consent to be given to process EU citizens’ data.

[…]

When BICS acquired TeleSign in 2017, the firm came under the partial control of BICS’ parent company, Belgian telecom giant Proximus, which held a partial stake in BICS after spinning it off from its own operations in 1997.

In 2021, Proximus bought out BICS’ other shareholders, making it the sole owner of both the telecom interchange and TeleSign.

With that in mind, noyb is also leveling charges against Proximus and BICS. In its complaint, noyb said Proximus was asked by EU citizens from various countries to provide records of the data TeleSign processed, as is their right under Article 15 of the GDPR.

The complainants weren’t given the information they requested, noyb says; it claims what was handed over was simply a template copy of the EU’s standard contractual clauses (SCCs), which businesses have used when transmitting data between the EU and US while the two sides try to work out data transfer rules that Schrems won’t get struck down in court.

[…]

Noyb is seeking cessation of all data transfers from BICS to TeleSign and of all processing of said data, and is requesting deletion of all unlawfully transmitted data. It’s also asking Belgian data protection authorities to fine Proximus, which noyb said could reach as high as €236 million ($257 million) – a mere 4 percent of Proximus’s global turnover.

[…]

Source: US firm ‘breached GDPR’ by reputation-scoring EU citizens • The Register

This firm is absolutely massive, yet it’s a smaller part of BICS, and chances are you’ve never heard of either of them!

Broadcom squeezed Samsung, now South Korea’s squeezing back

As the Commission explained in a Tuesday adjudication, Broadcom and Samsung were in talks for a long-term supply agreement when the American chipmaker demanded the Korean giant sign or it would suspend shipments and support services.

Broadcom also wanted Samsung to commit to spending over $760 million a year, to make up the difference for any shortfalls, and not to buy from rivals.

With the market for the components it needs tight, Samsung reportedly signed. Then, when a certain viral pandemic cruelled its business, the giant conglomerate found itself having to buy parts it didn’t need. The chaebol estimates the deal cost it millions.

News of the deal eventually reached the regulator, which in 2022 asked Broadcom to propose a remedy – a common method of dispute resolution in South Korea.

Broadcom proposed a $15.5 million fund to stimulate South Korea’s small semiconductor outfits, plus extra support for Samsung.

On Tuesday, the Commission decided that’s not a reasonable restitution because it doesn’t include compensation for the impacted parties.

That’s bad news for Broadcom, because it means the regulator will now escalate matters – first by determining if the chipmaker broke local laws and then by considering a different penalty.

South Korea is protective of its local businesses – even giants like Samsung that are usually capable of fending for themselves. Broadcom reps will soon have some tricky-to-negotiate meetings on their agendas.

At least the corporation’s legal team has experience with this sort of thing. In 2018 it was probed by US authorities over contract practices, and in 2021 was forced to stop some anticompetitive practices. In 2022 it was in strife again – this time for allegedly forcing its customers to sign exclusive supply contracts.

The serial acquirer also lost a regulatory rumble over its attempted acquisition of Qualcomm, and is currently trying to explain why its proposed acquisition of VMware won’t harm competition.

Now it awaits South Korea’s wrath – and perhaps Samsung’s too.

Source: Broadcom squeezed Samsung, now South Korea’s squeezing back • The Register