Police bust pirate streaming service making €250 million per month: doesn’t this show the TV market is hugely broken?

An international law enforcement operation has dismantled a pirate streaming service that served over 22 million users worldwide and made €250 million ($263M) per month.

Italy’s Postal and Cybersecurity Police Service announced the action, codenamed “Taken Down,” stating they worked with Eurojust, Europol, and law enforcement in many other European countries, making this the largest takedown of its kind in Italy and internationally.

“More than 270 Postal Police officers, in collaboration with foreign law enforcement, carried out 89 searches in 15 Italian regions and 14 additional searches in the United Kingdom, the Netherlands, Sweden, Switzerland, Romania, Croatia, and China, involving 102 individuals,” reads the announcement.

“As part of the investigative framework initiated by the Catania Prosecutor’s Office and the Italian Postal Police, and with international cooperation, the Croatian police executed 11 arrest warrants against suspects.”

“Additionally, three high-ranking administrators of the IT network were identified in England and the Netherlands, along with 80 streaming control panels for IPTV channels managed by suspects throughout Italy,” mentions the police in the same announcement.

The pirated TV and content streaming service was operated by a hierarchical, transnational organization that illegally captured and resold content from popular streaming platforms.

The copyrighted content included redistributed IPTV, live broadcasts, and on-demand content from major broadcasters like Sky, Dazn, Mediaset, Amazon Prime, Netflix, Disney+, and Paramount.

The police say that these illegal streams were made accessible through numerous live-streaming websites but have not published any domains.

Annual financial damages from the illegal service are estimated at a massive €10 billion ($10.5B).

These broadcasts were resold to 22 million subscribed members via multiple distribution channels and an extensive seller network.

As a result of operation “Taken Down,” the authorities seized over 2,500 illegal channels and their servers, including nine servers in Romania and Hong Kong.

[…]

Source: Police bust pirate streaming service making €250 million per month

Bad licensing decisions by TV stations and broadcasters have given these streamers a product that people apparently really really want and are willing to pay for.

Don’t shut down the streamers, shut down the system that makes this kind of product impossible to get.

BBC Gives Away huge Sound Effects Library, with readable and sensible terms of use


Terms for using our content

A few rules to stop you (and us) getting in trouble.

a) Don’t mess with our content
What do we mean by that? This sort of thing:

  • Removing or altering BBC logos, and copyright notices from the content (if there are any)
  • Not removing content from your device or systems when we ask you to. This might happen when we take down content either temporarily or permanently, which we can do at any time, without notice.
b) Don’t use our content for harmful or offensive purposes
Here’s a list of things that may harm or offend:

  • Insulting, misleading, discriminating or defaming (damaging people’s reputations)
  • Promoting pornography, tobacco or weapons
  • Putting children at risk
  • Anything illegal. Like using hate speech, inciting terrorism or breaking privacy law
  • Anything that would harm the BBC’s reputation
  • Using our content for political or social campaigning purposes or for fundraising.
c) Don’t make it look like our content costs money

If you put our content on a site that charges for content, you have to say it is free-to-view.

d) Don’t make our content more prominent than non-BBC content

Otherwise it might look like we’re endorsing you. Which we’re not allowed to do.

Also, use our content alongside other stuff (e.g. your own editorial text). You can’t make a service of your own that contains only our content.

Speaking of which…

e) Don’t exaggerate your relationship with the BBC

You can’t say we endorse, promote, supply or approve of you.

And you can’t say you have exclusive access to our content.

f) Don’t associate our content with advertising or sponsorship
That means you can’t:

  • Put any other content between the link to our content and the content itself. So no ads or short videos people have to sit through
  • Put ads next to or over it
  • Put any ads in a web page or app that contain mostly our content
  • Put ads related to their subject alongside our content. So no trainer ads with an image of shoes
  • Add extra content that means you’d earn money from our content.
g) Don’t be misleading about where our content came from

You can’t remove or alter the copyright notice, or imply that someone else made it.

h) Don’t pretend to be the BBC
That includes:

  • Using our brands, trade marks or logos without our permission
  • Using or mentioning our content in press releases and other marketing materials
  • Making money from our content. You can’t charge people to view our images, for example
  • Sharing our content. For example, no uploading to social media sites. Sharing links is OK.

Source: Licensing | BBC Sound Effects

This is how licenses should be written. Well done, BBC.

Epic Allows Internet Archive To Distribute For Free ‘Unreal’ & ‘Unreal Tournament’ Forever

One of the most frustrating aspects of the ongoing conversation around the preservation of older video games, also known as cultural output, is the collision of IP rights with some publishers’ unwillingness to continue supporting and making these older games available, combined with their refusal to release those same games into the public domain so that others can do so. It creates this crazy situation in which a company insists on retaining its copyrights over a video game that it has effectively disappeared, with no good or legitimate way for the public to preserve it. As I’ve argued for some time now, this breaks the copyright contract with the public and should come with repercussions. The whole bargain that is copyright law is that a creative work is granted a limited monopoly on its production, with that work eventually arriving in the public domain. If that arrival is not allowed to occur, the bargain is broken, and not by anyone who would supposedly “infringe” on the copyright of that work.

[…]

But it just doesn’t have to be like this. Companies could be willing to give up their iron-fisted control over their IP for these older games they aren’t willing to support or preserve themselves and let others do it for them. And if you need a real-world example of that, you need only look at how Epic is working with The Internet Archive to do exactly that.

Epic, now primarily known for Fortnite and the Unreal Engine, has given permission for two of the most significant video games ever made, Unreal and Unreal Tournament, to be freely accessed via the Internet Archive. As spotted by RPS, via ResetEra, the OldUnreal group announced the move on their Discord, along with instructions for how to easily download and play them on modern machines.

Huge kudos to Epic for being cool with this, because while it shouldn’t be unusual to happily let people freely share a three-decade-old game you don’t sell any more, it’s vanishingly rare. And if you remain in any doubt, we just got word back from Epic confirming they’re on board.

“We can confirm that Unreal 1 and Unreal Tournament are available on archive.org,” a spokesperson told us by email, “and people are free to independently link to and play these versions.”

Importantly, OldUnreal and The Internet Archive very much know what they’re doing here. Grabbing the ZIP file for the game sleekly pulls the ISO directly from The Internet Archive, installs it, and there are instructions for how to get the game up and running on modern hardware. This is obviously a labor of love from fans dedicated toward keeping these two excellent games alive.

[…]

But this is just two games. What would be really nice to see is this become a trend, or, better yet, a program run by The Internet Archive. Don’t want to bother to preserve your old game? No problem, let the IA do it for you!

Source: Epic Allows Internet Archive To Distribute For Free ‘Unreal’ & ‘Unreal Tournament’ Forever | Techdirt

HarperCollins Confirms It Has a Deal to Bleed Authors, Allowing Their Work to Be Used as Training Data for an AI Company

HarperCollins, one of the biggest publishers in the world, made a deal with an “artificial intelligence technology company” and is giving authors the option to opt in to the agreement or pass, 404 Media can confirm.

[…]

On Friday, author Daniel Kibblesmith, who wrote the children’s book Santa’s Husband and published it with HarperCollins, posted screenshots on Bluesky of an email he received, seemingly from his agent, informing him that the agency was approached by the publisher about the AI deal. “Let me know what you think, positive or negative, and we can handle the rest of this for you,” the screenshotted text in an email to Kibblesmith says. The screenshots show the agent telling Kibblesmith that HarperCollins was offering $2,500 (non-negotiable).

[…]

“You are receiving this memo because we have been informed by HarperCollins that they would like permission to include your book in an overall deal that they are making with a large tech company to use a broad swath of nonfiction books for the purpose of providing content for the training of an AI language learning model,” the screenshots say. “You are likely aware, as we all are, that there are controversies surrounding the use of copyrighted material in the training of AI models. Much of the controversy comes from the fact that many companies seem to be doing so without acknowledging or compensating the original creators. And of course there is concern that these AI models may one day make us all obsolete.”

“It seems like they think they’re cooked, and they’re chasing short money while they can. I disagree,” Kibblesmith told the AV Club. “The fear of robots replacing authors is a false binary. I see it as the beginning of two diverging markets, readers who want to connect with other humans across time and space, or readers who are satisfied with a customized on-demand content pellet fed to them by the big computer so they never have to be challenged again.”

Source: HarperCollins Confirms It Has a Deal to Sell Authors’ Work to AI Company

Now the copyright industry wants to apply deep, automated blocking to the Internet’s core routers

A central theme of Walled Culture the book (free digital versions available) and this blog is that the copyright industry is never satisfied. No matter how long the term of copyright, publishers and recording companies want more. No matter how harsh the punishments for infringement, the copyright intermediaries want them to be even more severe.

Another manifestation of this insatiability is seen in the ever-widening use of Internet site blocking. What began as a highly targeted one-off in the UK, when a court ordered the Newzbin2 site to be blocked, has become a favoured method of the copyright industry for cutting off access to thousands of sites around the world, including many blocked by mistake. Even more worryingly, the approach has led to blocks being implemented in some key parts of the Internet’s infrastructure that have no involvement with the material that flows through them: they are just a pipe. For example, last year we wrote about courts ordering the content delivery network Cloudflare to block sites. But even that isn’t enough, it seems. A post on TorrentFreak reports on a move to embed site blocking at the very heart of the Internet. This emerges from an interview about the Brazilian telecoms regulator Anatel:

In an interview with Tele.Sintese, outgoing Anatel board member Artur Coimbra recalls the lack of internet infrastructure in Brazil as recently as 2010. As head of the National Broadband Plan under the Ministry of Communications, that’s something he personally addressed. For Anatel today, blocking access to pirate websites and preventing unauthorized devices from communicating online is all in a day’s work.

Here’s the key revelation spotted by TorrentFreak:

“The second step, which we still need to evaluate because some companies want it, and others are more hesitant, is to allow Anatel to have access to the core routers to place a direct order on the router,” Coimbra reveals, referencing IPTV [Internet Protocol television] blocking.

“In these cases, these companies do not need to have someone on call to receive the [blocking] order and then implement it.”

Later on, Coimbra clarifies how far along this plan is:

“Participation is voluntary. We are still testing with some companies. So, it will take some time until it actually happens,” Coimbra says. “I can’t say [how long]. Our inspection team is carrying out tests with some operators, I can’t say which ones.”

Even if this is still in the testing phase, and only with “some” companies, it’s a terrible precedent. It means that blocking – and thus censorship – can be applied automatically, possibly without judicial oversight, to some of the most fundamental parts of the Internet’s plumbing. Once that happens, it will spread, just as the original single site block in the UK has spread worldwide. There’s even a hint that might already be happening. Asked if such blocking is being applied anywhere else, Coimbra replies:

“I don’t know. Maybe in Spain and Portugal, which are more advanced countries in this fight. But I don’t have that information,” Coimbra responds, randomly naming two countries with which Brazil has consulted extensively on blocking matters.

Although it’s not clear from that whether Spain and Portugal are indeed taking this route, the fact that Coimbra suggests that they might be is deeply troubling. And even if they aren’t, we can be sure that the copyright industry will keep demanding Internet blocks and censorship at the deepest level until they get them.

Source: Now the copyright industry wants to apply deep, automated blocking to the Internet’s core routers – Walled Culture

Judge: Just Because AI Trains On Your Publication, Doesn’t Mean It Infringes On Your Copyright. Another case thrown out.

I get that a lot of people don’t like the big AI companies and how they scrape the web. But these copyright lawsuits being filed against them are absolute garbage. And you want that to be the case, because if it goes the other way, it will do real damage to the open web by further entrenching the largest companies. If you don’t like the AI companies find another path, because copyright is not the answer.

So far, we’ve seen that these cases aren’t doing all that well, though many are still ongoing.

Last week, a judge tossed out one of the early ones against OpenAI, brought by Raw Story and Alternet.

Part of the problem is that these lawsuits assume, incorrectly, that these AI services really are, as some people falsely call them, “plagiarism machines.” The assumption is that they’re just copying everything and then handing out snippets of it.

But that’s not how it works. It is much more akin to reading all these works and then being able to make suggestions based on an understanding of how similar things kinda look, though from memory, not from having access to the originals.

Some of this case focused on whether or not OpenAI removed copyright management information (CMI) from the works that they were being trained on. This always felt like an extreme long shot, and the court finds Raw Story’s arguments wholly unconvincing in part because they don’t show any work that OpenAI distributed without their copyright management info.

For one thing, Plaintiffs are wrong that Section 1202 “grant[s] the copyright owner the sole prerogative to decide how future iterations of the work may differ from the version the owner published.” Other provisions of the Copyright Act afford such protections, see 17 U.S.C. § 106, but not Section 1202. Section 1202 protects copyright owners from specified interferences with the integrity of a work’s CMI. In other words, Defendants may, absent permission, reproduce or even create derivatives of Plaintiffs’ works (without incurring liability under Section 1202) as long as Defendants keep Plaintiffs’ CMI intact. Indeed, the legislative history of the DMCA indicates that the Act’s purpose was not to guard against property-based injury. Rather, it was to “ensure the integrity of the electronic marketplace by preventing fraud and misinformation,” and to bring the United States into compliance with its obligations to do so under the World Intellectual Property Organization (WIPO) Copyright Treaty, art. 12(1) (“Obligations concerning Rights Management Information”) and WIPO Performances and Phonograms Treaty….

Moreover, I am not convinced that the mere removal of identifying information from a copyrighted work, absent dissemination, has any historical or common-law analogue.

Then there’s the bigger point, which is that the judge, Colleen McMahon, has a better understanding of how ChatGPT works than the plaintiffs and notes that just because ChatGPT was trained on pretty much the entire internet, that doesn’t mean it’s going to infringe on Raw Story’s copyright:

Plaintiffs allege that ChatGPT has been trained on “a scrape of most of the internet,” Compl. ¶ 29, which includes massive amounts of information from innumerable sources on almost any given subject. Plaintiffs have nowhere alleged that the information in their articles is copyrighted, nor could they do so. When a user inputs a question into ChatGPT, ChatGPT synthesizes the relevant information in its repository into an answer. Given the quantity of information contained in the repository, the likelihood that ChatGPT would output plagiarized content from one of Plaintiffs’ articles seems remote.

Finally, the judge basically says, “Look, I get it, you’re upset that ChatGPT read your stuff, but you don’t have an actual legal claim here.”

Let us be clear about what is really at stake here. The alleged injury for which Plaintiffs truly seek redress is not the exclusion of CMI from Defendants’ training sets, but rather Defendants’ use of Plaintiffs’ articles to develop ChatGPT without compensation to Plaintiffs. See Compl. ¶ 57 (“The OpenAI Defendants have acknowledged that use of copyright-protected works to train ChatGPT requires a license to that content, and in some instances, have entered licensing agreements with large copyright owners … They are also in licensing talks with other copyright owners in the news industry, but have offered no compensation to Plaintiffs.”). Whether or not that type of injury satisfies the injury-in-fact requirement, it is not the type of harm that has been “elevated” by Section 1202(b)(i) of the DMCA. See Spokeo, 578 U.S. at 341 (Congress may “elevate to the status of legally cognizable injuries, de facto injuries that were previously inadequate in law.”). Whether there is another statute or legal theory that does elevate this type of harm remains to be seen. But that question is not before the Court today.

While the judge dismissed the case and left the plaintiffs room to try again, it would appear that she is skeptical they could do so with any reasonable chance of success:

In the event of dismissal Plaintiffs seek leave to file an amended complaint. I cannot ascertain whether amendment would be futile without seeing a proposed amended pleading. I am skeptical about Plaintiffs’ ability to allege a cognizable injury but, at least as to injunctive relief, I am prepared to consider an amended pleading.

I totally get why publishers are annoyed and why they keep suing. But copyright is the wrong tool for the job. Hopefully, more courts will make this clear and we can get past all of these lawsuits.

Source: Judge: Just Because AI Trains On Your Publication, Doesn’t Mean It Infringes On Your Copyright | Techdirt

The Open Source Project DeFlock Is Mapping License Plate Surveillance Cameras All Over the World

[…] Flock is one of the largest vendors of automated license plate readers (ALPRs) in the country. The company markets itself as having the goal to fully “eliminate crime” with the use of ALPRs and other connected surveillance cameras, a target experts say is impossible.

In Huntsville, Freeman noticed that license plate reader cameras were positioned in a circle at major intersections, forming a perimeter that could track any car going into or out of the city’s downtown. He started to look for cameras all over Huntsville and the surrounding areas, and soon found that Flock was not the only game in town. He found cameras owned by Motorola, and a third vendor, Avigilon (a subsidiary of Motorola). Flock and automated license plate reader cameras owned by other companies are now in thousands of neighborhoods around the country. Many of these systems talk to each other and plug into other surveillance systems, making it possible to track people all over the country.

[…]

And so he made a map, and called it DeFlock. DeFlock runs on OpenStreetMap, an open source, editable mapping platform. He began posting signs for DeFlock on the posts holding up Huntsville’s ALPR cameras, and made a post about the project to the Huntsville subreddit, which got good attention from people who lived there.
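Because the data lives in OpenStreetMap, anyone can pull it back out with the public Overpass API. Here is a minimal sketch of building such a query, assuming the cameras are tagged with OSM's `man_made=surveillance` tag; the bounding box coordinates are illustrative, and DeFlock's actual tagging scheme (vendor, direction, etc.) may use additional tags.

```python
def build_overpass_query(south: float, west: float, north: float, east: float) -> str:
    """Build an Overpass QL query for nodes tagged as surveillance
    devices inside a latitude/longitude bounding box."""
    bbox = f"{south},{west},{north},{east}"
    return (
        "[out:json][timeout:25];"                       # ask for JSON output
        f'node["man_made"="surveillance"]({bbox});'     # camera nodes in the box
        "out body;"                                     # return full node data
    )

# Illustrative bounding box roughly covering downtown Huntsville, AL.
query = build_overpass_query(34.71, -86.61, 34.75, -86.56)
print(query)
```

POSTing that string to a public Overpass endpoint would return the matching camera nodes as JSON; this only sketches the query construction, not the network call.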

[…]

When I first talked to Freeman, DeFlock had a few dozen cameras mapped in Huntsville and a handful mapped in Southern California and in the Seattle suburbs. A week later, as I write this, DeFlock has crowdsourced the locations of thousands of cameras in dozens of cities across the United States and the world.

“It still just scratches the surface,” Freeman said. “I added another page to the site that tracks cities and counties who have transparency reports on Flock’s site, and many of those don’t have any reported ALPRs though, so it’ll help people focus on where to look for them.”

[…]

He said so far more than 1,700 cameras have been reported in the United States and more than 5,600 have been reported around the world. He has also begun scraping parts of Flock’s website to give people a better idea of where to look to map them. For example, Flock says that Colton, California, a city with just over 50,000 people outside of San Bernardino, has 677 cameras.

A ring of Flock cameras in Huntsville’s downtown, pointing outward.

People who submit cameras to DeFlock have the ability to note the direction that they are pointing in, which can help people understand how these cameras are being positioned and the strategies that companies and police departments are using when deploying them.

[…]

Freeman also said he eventually wants to find a way to offer navigation directions that will allow people to avoid known ALPR cameras. The fact that it is impossible to drive in some cities without passing ALPR cameras that track and catalog your car’s movements is one of the core arguments in a Fourth Amendment challenge to Flock’s existence in Norfolk, Virginia; this project will likely show how infeasible traveling without being tracked actually is in America. Knowing where they are is the first step toward resisting them.

Source: The Open Source Project DeFlock Is Mapping License Plate Surveillance Cameras All Over the World

Singapore to increase road capacity by GPS tracking all vehicles. Because location data is not sensitive and will never be hacked *cough*

Singapore’s Land Transport Authority (LTA) estimated last week that by tracking all vehicles with GPS it will be able to increase road capacity by 20,000 vehicles over the next few years.

The densely populated island state is moving from what it calls Electronic Road Pricing (ERP) 1.0 to ERP 2.0. The first version used gantries – or automatic tolls – to charge drivers a fee through an in-car device when they used specific roadways during certain hours.

ERP 2.0 instead tracks the vehicle through GPS, which can tell where a vehicle is at all times while it is operating.

“ERP 2.0 will provide more comprehensive aggregated traffic information and will be able to operate without physical gantries. We will be able to introduce new ‘virtual gantries,’ which allow for more flexible and responsive congestion management,” explained the LTA.
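In principle, a “virtual gantry” is just a geofence: the in-car unit reports a GPS fix, and a charge is levied when the fix falls inside a defined charging zone. A minimal sketch of that idea, where the gantry location, trigger radius, and fee are all hypothetical and not Anatel's or the LTA's actual parameters:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical virtual gantries: a charging point, trigger radius, and fee.
GANTRIES = [
    {"name": "CBD entry", "lat": 1.2840, "lon": 103.8510,
     "radius_km": 0.05, "fee_sgd": 2.0},
]

def charge_for_position(lat, lon):
    """Total fee owed if this GPS fix falls within any gantry's radius."""
    return sum(g["fee_sgd"] for g in GANTRIES
               if haversine_km(lat, lon, g["lat"], g["lon"]) <= g["radius_km"])
```

A real deployment would of course debounce repeated fixes, vary fees by time of day, and handle GPS error, but the core mechanism is this point-in-zone check rather than a physical gantry.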

But the island’s government doesn’t just control inflow into urban areas through toll-like charging – it also aggressively controls the total number of cars operating within its borders.

Singapore requires vehicle owners to bid for a set number of Certificates of Entitlement – costly operating permits valid for only ten years. The result is an added cost of around SG$100,000 ($75,500) every ten years, depending on that year’s COE price, on top of the car’s purchase price. The high total price disincentivizes mass car ownership, which helps the government manage traffic and emissions.

[…]

Source: Singapore to increase road capacity by GPS tracking vehicles • The Register

Washington Post and LA Times suppressed by fascist Trump Through Billionaire Cowardice

Newspaper presidential endorsements may not actually matter that much, but billionaire media owners blocking editorial teams from publishing their endorsements out of concern over potential retaliation from a future Donald Trump presidency should matter a lot.

If people were legitimately worried about the “weaponization of government” and the idea that companies might silence speech over threats from the White House, what has happened over the past few days should raise alarm bells. But somehow I doubt we’ll be seeing the folks who were screaming bloody murder over the nothingburger that was the Murthy lawsuit saying a word of concern about billionaire media owners stifling the speech of their editorial boards to curry favor with Donald Trump.

In 2017, the Washington Post changed its official slogan to “Democracy Dies in Darkness.”

The phrase was apparently a favorite of Bob Woodward, who was one of the main reporters who broke the Watergate story decades ago. Lots of people criticized the slogan at the time (and have continued to do so since then), but no more so than today, as Jeff Bezos apparently stepped in to block the newspaper from endorsing Kamala Harris for President.

An endorsement of Harris had been drafted by Post editorial page staffers but had yet to be published, according to two people who were briefed on the sequence of events and who spoke on the condition of anonymity because they were not authorized to speak publicly. The decision to no longer publish presidential endorsements was made by The Post’s owner, Amazon founder Jeff Bezos, according to the same two people.

This comes just days after a similar situation with the LA Times, whose billionaire owner, Patrick Soon-Shiong, similarly blocked the editorial board from publishing its planned endorsement of Harris. Soon-Shiong tried to “clarify” by claiming he had asked the team to instead publish something looking at the pros and cons of each candidate. However, as members of the editorial board noted in response, that’s what you’d expect the newsroom to do. The editorial board is literally supposed to express its opinion.

In the wake of that decision, at least three members of the LA Times editorial board have resigned. Mariel Garza quit almost immediately, and Robert Greene and Karin Klein followed a day later. As of this writing, it appears at least one person, editor-at-large Robert Kagan, has resigned from the Washington Post.

Or, as the Missing The Point account on Bluesky noted, perhaps the Washington Post is changing its slogan to “Hello Darkness My Old Friend”.

Marty Baron, who had been the Executive Editor of the Washington Post when it chose “Democracy Dies in Darkness” as a slogan, called Bezos’ decision out as “cowardice” and warned that Trump would see this as a victory of his intimidation techniques, and that it would embolden him.

The thing is, for all the talk over the past decade or so about “free speech” and “the weaponization of government,” this sure looks like these two billionaires suppressing speech from their organizations over fear of how Trump will react, should he be elected.

During his last term, Donald Trump famously targeted Amazon in retaliation for coverage he didn’t like from the Washington Post. His anger at WaPo coverage caused him to ask the Postmaster General to double Amazon’s postage rates. Trump also told his Secretary of Defense James Mattis to “screw Amazon” and to kill a $10 billion cloud computing deal the Pentagon had lined up.

For all the (misleading) talk about the Biden administration putting pressure on tech companies, what Trump did there seemed like legitimate First Amendment violations. He punished Amazon for speech he didn’t like. It’s funny how all the “weaponization of the government” people never made a peep about any of that.

As for Soon-Shiong, it’s been said that he angled for a cabinet-level “health care czar” position in the last Trump administration, so perhaps he’s hoping to increase his chances this time around.

In both cases, though, this sure looks like Trump’s past retaliations and direct promises of future retaliation against all who have challenged him are having a very clear censorial impact. In the last few months Trump has been pretty explicit that, should he win, he intends to punish media properties that reported on him in ways he dislikes. These are all reasons why anyone who believes in free speech should be speaking out about the dangers of Donald Trump towards our most cherished First Amendment rights.

Especially those in the media.

Bezos and Soon-Shiong are acting like cowards. Rather than standing up and doing what’s right, they’re pre-caving, before the election has even happened. It’s weak and pathetic, and Trump will (accurately) take it to mean that he can continue to walk all over them, and continue to get the media to pull punches by threatening retaliation.

If democracy dies in darkness, it’s because Bezos and Soon-Shiong helped turn off the light they were carrying.

Source: Democracy Dies In Darkness… Helped Along By Billionaire Cowardice | Techdirt

Feds Say You Don’t Have a Right to Check Out Retro Video Games Like Library Books. Want you to pirate them apparently.

Most of the world’s video games from close to 50 years of history are effectively, legally dead. A Video Games History Foundation study found you can’t buy nearly 90% of games from before 2010. Preservationists have been looking for ways to allow people to legally access gaming history, but the U.S. Copyright Office dealt them a heavy blow Friday. The feds declared that neither you nor any researcher has a right to access old games under the Digital Millennium Copyright Act, or DMCA.

Groups like the VGHF and the Software Preservation Network have been putting their weight behind an exemption to the DMCA surrounding video game access. The law says that you can’t remotely access old, defunct games that are still under copyright without a license, even though they’re not available for purchase. Current rules in the DMCA restrict libraries and repositories of old games to one person at a time, in person.

The foundation’s proposed exemption would have allowed more than one person at a time to access the content stored in museums, archives, and libraries. This would allow players to access a piece of video game history like they would if they checked out an ebook from a library. The VGHF and SPN argued that if the museum has several copies of a game in its possession, then it should be able to allow as many people to access the game as there are copies available.

In the Copyright Office’s decision dated Oct. 18 (found on Page 30), Director Shira Perlmutter agreed with multiple industry groups, including the Entertainment Software Association. She recommended the Library of Congress keep the same restrictions. Section 1201 of the DMCA restricts “unauthorized” access to copyrighted works, including games. However, it allows the Library of Congress to allow some classes of people to circumvent those restrictions.

In a statement, the VGHF said lobbying efforts from rightsholders “continue to hold back progress.” The group pointed to comments from a representative from the ESA. An attorney for the ESA told Ars Technica, “I don’t think there is at the moment any combinations of limitations that ESA members would support to provide remote access.”

Video game preservationists said these game repositories could show full-screen popups of copyright notices to anybody who checked out a game. They would also restrict access with time limits or force users to play via “technological controls,” like a purpose-built distribution or streaming platform.

Industry groups argued that those museums didn’t have “appropriate safeguards” to prevent users from distributing the games once they had them in hand. They also argued that there’s a “substantial market” for older or classic games, and a new, free library to access games would “jeopardize” this market. Perlmutter agreed with the industry groups.

“While the Register appreciates that proponents have suggested broad safeguards that could deter recreational uses of video games in some cases, she believes that such requirements are not specific enough to conclude that they would prevent market harms,” she wrote.

Do libraries that lend books hurt the literary industry? In many cases, publishers see libraries as free advertising for their products. It creates word of mouth, and since libraries only have a limited number of copies, those who want a book to read for longer are incentivized to purchase one. The video game industry is so effective at shooting itself in the foot that it doesn’t even recognize when third-party preservationists are actively trying to help it, at no cost to the publishers.

If there is such a substantial market for classic games, why are so many still unavailable for purchase? Players will inevitably turn to piracy or emulation if there’s no easy-to-access way of playing older games.

“The game industry’s absolutist position… forces researchers to explore extra-legal methods to access the vast majority of out-of-print video games that are otherwise unavailable,” the VGHF wrote.

Source: Feds Say You Don’t Have a Right to Check Out Retro Video Games Like Library Books

Juicy Licensing Deals With AI Companies Show That Publishers Don’t Actually Care About Creators

One of the many interesting aspects of the current enthusiasm for generative AI is the way that it has electrified the formerly rather sleepy world of copyright. Where before publishers thought they had successfully locked down more or less everything digital with copyright, they now find themselves confronted with deep-pocketed companies – both established ones like Google and Microsoft, and newer ones like OpenAI – that want to overturn the previous norms of using copyright material. In particular, the latter group want to train their AI systems on huge quantities of text, images, videos and sounds.

As Walled Culture has reported, this has led to a spate of lawsuits from the copyright world, desperate to retain its control over digital material. They have framed this as an act of solidarity with the poor exploited creators. It’s a shrewd move, and one that seems to be gaining traction. Lots of writers and artists think they are being robbed of something by Big AI, even though that view is based on a misunderstanding of how generative AI works. However, in light of stories like this one in The Bookseller, they might want to reconsider their views about who exactly is being evil here:

Academic publisher Wiley has revealed it is set to make $44 million (£33 million) from Artificial Intelligence (AI) partnerships that it is not giving authors the opportunity to opt-out from.

As to whether authors would share in that bounty:

A spokesperson confirmed that Wiley authors are set to receive remuneration for the licensing of their work based on their “contractual terms”.

That might mean they get nothing, if there is no explicit clause in their contract about sharing AI licensing income. For example, here’s what is happening with the publisher Taylor & Francis:

In July, authors hit out at another academic publisher, Taylor & Francis, the parent company of Routledge, over an AI deal with Microsoft worth $10 million, claiming they were not given the opportunity to opt out and are receiving no extra payment for the use of their research by the tech company. T&F later confirmed it was set to make $75 million from two AI partnership deals.

It’s not just in the world of academic publishing that deals are being struck. Back in July, Forbes reported on a “flurry of AI licensing activity”:

The most active area for individual deals right now by far—judging from publicly known deals—is news and journalism. Over the past year, organizations including Vox Media (parent of New York magazine, The Verge, and Eater), News Corp (Wall Street Journal, New York Post, The Times (London)), Dotdash Meredith (People, Entertainment Weekly, InStyle), Time, The Atlantic, Financial Times, and European giants such as Le Monde of France, Axel Springer of Germany, and Prisa Media of Spain have each made licensing deals with OpenAI.

In the absence of any public promises to pass on some of the money these licensing deals will bring, it is not unreasonable to assume that journalists won’t be seeing much if any of it, just as they aren’t seeing much from the link tax.

The increasing number of such licensing deals between publishers and AI companies shows that the former aren’t really too worried about the latter ingesting huge quantities of material to train their AI systems, provided they get paid. And the fact that there is no sign of this money being passed on in its entirety to the people who actually created that material confirms that publishers don’t really care about creators. In other words, it’s pretty much the status quo from before generative AI came along. For doing nothing, the intermediaries are extracting money from the digital giants by invoking the creators and their copyrights. Those creators do all the work, but once again see little to no benefit from the deals being signed behind closed doors.

Source: Juicy Licensing Deals With AI Companies Show That Publishers Don’t Actually Care About Creators | Techdirt

Google changes Terms Of Service, now spies on your AI prompts

The new terms come in on November 15th.

4.3 Generative AI Safety and Abuse. Google uses automated safety tools to detect abuse of Generative AI Services. Notwithstanding the “Handling of Prompts and Generated Output” section in the Service Specific Terms, if these tools detect potential abuse or violations of Google’s AUP or Prohibited Use Policy, Google may log Customer prompts solely for the purpose of reviewing and determining whether a violation has occurred. See the Abuse Monitoring documentation page for more information about how logging prompts impacts Customer’s use of the Services.

Source: Google Cloud Platform Terms Of Service

Both uBlock Origin and Lite face browser problems

Both uBlock Origin and its smaller sibling, uBlock Origin Lite, are experiencing problems thanks to browser vendors that really ought to know better.

Developer Raymond Hill, or gorhill on GitHub, is one of the biggest unsung heroes of the modern web. He’s the man behind two of the leading browser extensions to block unwanted advertising, the classic uBlock Origin and its smaller, simpler relation, uBlock Origin Lite. They both do the same job in significantly different ways, so depending on your preferred browser, you now must make a choice.

Gorhill reports on GitHub that an automated code review by Mozilla flagged problems with uBlock Origin Lite. As a result, he has pulled the add-on from Mozilla’s extensions site. The extension’s former page now just says “Oops! We can’t find that page”. You can still install it direct from GitHub, though.

The good news is that the full-fat version, uBlock Origin, is still there, so you can choose that. Hill has a detailed explanation of why and how uBlock Origin works best on Firefox. It’s a snag, though, if like The Reg FOSS desk you habitually run both Firefox and Chrome and want to keep both on the same ad blocker.

That’s because, as The Register warned back in August, Google’s new Manifest V3 extensions system means the removal of Manifest V2 – upon which uBlock Origin depends. For now, it still works – this vulture is running Chrome version 130 and uBO is still functioning. It’s still available on Google’s web extensions store, with a slightly misleading warning:

This extension may soon no longer be supported because it doesn’t follow best practices for Chrome extensions.

So, if you use Chrome, or a Chrome-based browser – which is most of them – then you will soon be compelled to remove uBO and switch to uBlock Origin Lite instead.

It would surely be overly cynical of us to suggest that issues with ad blockers were a foreseeable difficulty now that Mozilla is an advertising company.

To sum up, if you have a Mozilla-family browser, uBlock Origin is the easier option. If you have a Chrome-family browser, such as Microsoft Edge, then, very soon, uBlock Origin Lite will be the only version available to you.

There are other in-browser ad-blocking options out there, of course.

Linux users may well want to consider having Privoxy running in the background as well. For example, on Ubuntu and Debian-family distros, just type sudo apt install -y privoxy, then point your browser at the local proxy (by default 127.0.0.1, port 8118). If you run your own home network, maybe look into configuring an old Raspberry Pi with Pi-hole.
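Privoxy’s blocking is driven by action files rather than browser extensions; a minimal user.action fragment (the hostnames here are purely illustrative) looks something like this:

```
# Illustrative Privoxy user.action fragment: block requests to these hosts.
# A leading dot matches the domain and all of its subdomains.
{ +block{Ad servers} }
.doubleclick.net
ads.example.com
```

Because the filtering happens at the proxy, it covers every browser on the machine that is pointed at it, not just one with an extension installed.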

uBlock Origin started out as a fork of uBlock, which is now owned by the developers of AdBlock – which means that, as The Register said in 2021, it is “made by an advertising company that brokers ‘acceptable ads.'”

If acceptable ads don’t sound so bad – and to be fair, they’re better than the full Times-Square-neon-infested experience of much of the modern web – then you can still install the free AdBlock Plus, which is in both Mozilla’s store and the Chrome store.

Source: Both uBlock Origin and Lite face browser problems • The Register

German court: LAION’s generative AI training dataset is legal thanks to EU copyright exceptions

The copyright world is currently trying to assert its control over the new world of generative AI through a number of lawsuits, several of which have been discussed previously on Walled Culture. We now have our first decision in this area, from the regional court in Hamburg. Andres Guadamuz has provided an excellent detailed analysis of a ruling that is important for the German judges’ discussion of how EU copyright law applies to various aspects of generative AI. The case concerns the freely-available dataset from LAION (Large-scale Artificial Intelligence Open Network), a German non-profit. As the LAION FAQ says: “LAION datasets are simply indexes to the internet, i.e. lists of URLs to the original images together with the ALT texts found linked to those images.” Guadamuz explains:

The case was brought by German photographer Robert Kneschke, who found that some of his photographs had been included in the LAION dataset. He requested the images to be removed, but LAION argued that they had no images, only links to where the images could be found online. Kneschke argued that the process of collecting the dataset had included making copies of the images to extract information, and that this amounted to copyright infringement.

LAION admitted making copies, but said that it was in compliance with the exception for text and data mining (TDM) present in German law, which is a transposition of Article 3 of the 2019 EU Copyright Directive. The German judges agreed:

The court argued that while LAION had been used by commercial organisations, the dataset itself had been released to the public free of charge, and no evidence was presented that any commercial body had control over its operations. Therefore, the dataset is non-commercial and for scientific research. So LAION’s actions are covered by section 60d of the German Copyright Act

That’s good news for LAION and its dataset, but perhaps more interesting for the general field of generative AI is the court’s discussion of how the EU Copyright Directive and its exceptions apply to AI training. It’s a key question because copyright companies claim that they don’t, and that when such training involves copyright material, permission is needed to use it. Guadamuz summarises that point of view as follows:

the argument is that the legislators didn’t intend to cover generative AI when they passed the [EU Copyright Directive], so text and data mining does not cover the training of a model, just the making of a copy to extract information from it. The argument is that making a copy to extract information to create a dataset is fine, as the court agreed here, but the making of a copy in order to extract information to make a model is not. I somehow think that this completely misses the way in which a model is trained; a dataset can have copies of a work, or in the case of LAION, links to the copies of the work. A trained model doesn’t contain copies of the works with which it was trained, and regurgitation of works in the training data in an output is another legal issue entirely.

The judgment from the Hamburg court says that while legislators may not have been aware of generative AI model training in 2019, when they drew up the EU Copyright Directive, they certainly are now. The judges use the EU’s 2024 AI Act as evidence of this, citing a paragraph that makes explicit reference to AI models complying with the text and data mining regulation in the earlier Copyright Directive.

As Guadamuz writes in his post, this is an important point, but the legal impact may be limited. The judgment is only the view of a local German court, so other jurisdictions may produce different results. Moreover, the original plaintiff, Robert Kneschke, may appeal, and the decision could be overturned. Furthermore, the ruling only concerns the use of text and data mining to create a training dataset, not the actual training itself, although the judges’ thoughts on the latter indicate that it would be legal too. In other words, this local outbreak of good sense in Germany is welcome, but we are still a long way from complete legal clarity on the training of generative AI systems on copyright material.

Source: German court: LAION’s generative AI training dataset is legal thanks to EU copyright exceptions – Walled Culture

Penguin Random House is adding an AI warning to its books’ copyright pages fwiw

Penguin Random House, the trade publisher, is adding language to the copyright pages of its books to prohibit the use of those books to train AI.

The Bookseller reports that new books and reprints of older titles from the publisher will now include the statement, “No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems.”

While the use of copyrighted material to train AI models is currently being fought over in multiple lawsuits, Penguin Random House appears to be the first major publisher to update its copyright pages to reflect these new concerns.

The update doesn’t mean Penguin Random House is completely opposed to the use of AI in book publishing. In August, it outlined an initial approach to generative AI, saying it will “vigorously defend the intellectual property that belongs to our authors and artists” while also promising to “use generative AI tools selectively and responsibly, where we see a clear case that they can advance our goals.”

Source: Penguin Random House is adding an AI warning to its books’ copyright pages | TechCrunch

Penguin spins it in support of authors, but the whole copyright thing really only fills the pockets of the publishers (e.g. Juicy Licensing Deals With AI Companies Show That Publishers Don’t Actually Care About Creators, above). This will probably not hold up in court.

If You Ever Rented From Redbox, Your Private Info Is Up for Grabs

If you’ve ever opted to rent a movie through a Redbox kiosk, your private info is out there waiting for any tinkerer to get their hands on it. One programmer who reverse-engineered a kiosk’s hard drive proved the Redbox machines can cough up transaction histories featuring customers’ names, emails, and rentals going back nearly a decade. It may even have part of your credit card number stored on-device.

[…]

a California-based programmer named Foone Turing managed to grab an unencrypted file from the internal hard drive that showed the emails, home addresses, and rental history of some or all of those who previously used the kiosk.

[…]

Turing told Lowpass that the Redbox machines stored some financial information on those drives, including the first six and last four digits of each credit card used and “some lower-level transaction details.” The devices did apparently connect to a secure payment system through Redbox’s servers, but they stored financial information in a log in a different folder than the rental records. She told us that the system likely only stored the last month of transaction logs.
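Keeping the first six digits (the issuer BIN) and the last four is the standard “masked PAN” form, roughly the most that PCI DSS display rules allow to remain visible; a sketch of what such a log entry retains (the function name is mine, not from the report):

```python
def mask_pan(pan: str) -> str:
    """Mask a card number, keeping the first six digits (issuer BIN)
    and the last four, as the Redbox logs reportedly did."""
    digits = pan.replace(" ", "")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # → 411111******1111
```

The middle digits are gone, so the full card number can’t be recovered from the log alone, but the BIN still reveals the issuing bank and card type, which is why even masked PANs sitting on an abandoned kiosk’s drive are a privacy problem.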

[…]

Source: If You Ever Rented From Redbox, Your Private Info Is Up for Grabs

Which is a great illustration of why there need to be regulations about what happens to personal data when a company is sold or goes bust.

OpenAI’s GPT Store Has Left Some Developers in the Lurch

[…] when OpenAI CEO Sam Altman spoke at the dev day, he touched on potential earning opportunities for developers.

“Revenue sharing is important to us,” Altman said. “We’re going to pay people who build the most useful and the most-used GPTs a portion of our revenue.”

[…]

Books GPT, which churns out personalized book recommendations and was promoted by OpenAI at the Store’s launch, is his most popular.

But 10 months after its launch, it seems that revenue-sharing has been reserved for a tiny number of developers in an invite-only pilot program run by OpenAI. Villocido, despite his efforts, wasn’t included.

According to Villocido and other small developers who spoke with WIRED, OpenAI’s GPT Store has been a mixed bag. These developers say that OpenAI’s analytics tools are lacking and that they have no real sense of how their GPTs are performing. OpenAI has said that GPT creators outside of the US, like Villocido, are not eligible for revenue-sharing.

Those who are able to make money from their GPTs usually devise workarounds, like placing affiliate links or advertising within their GPTs. Other small developers have used the success of their GPTs to market themselves while raising outside funding.

[…]

Copywriter GPT, his GPT that drafts advertising copy, has had between 500,000 and 600,000 interactions. Like Villocido’s Books GPT, Lin’s has been featured on the homepage of OpenAI’s Store.

But Lin can’t say exactly how much traction his GPTs have gotten or how frequently they are used, because OpenAI only provides “rough estimations” to small developers like him. And since he’s in Singapore, he won’t receive any payouts from OpenAI for the usage of his app.

[…]

as the creator of the Books GPT that was featured in the Store launch, he found he could no longer justify the $20 per month cost of the ChatGPT subscription required to build and maintain his custom GPTs.

He now collects a modest amount of revenue each month by placing ads in the GPTs he has already created, using a chatbot ad tool called Adzedek. On a good month, he can generate $200 in revenue. But he chooses not to funnel that back into ChatGPT.

Source: OpenAI’s GPT Store Has Left Some Developers in the Lurch | WIRED

Face matching now available on GSA’s login.gov, however it still fails at least 10% of the time

The US government’s General Services Administration’s (GSA) facial matching login service is now generally available to the public and other federal agencies, despite its own recent report admitting the tech is far from perfect.

The GSA yesterday announced general availability of remote identity verification (RiDV) technology through login.gov, as well as the service’s availability to other federal agencies. According to the agency, the technology behind the offering is “a new independently certified” solution that complies with the National Institute of Standards and Technology’s (NIST) 800-63 identity assurance level 2 (IAL2) standard.

IAL2 identity verification involves remote or in-person verification of a person’s identity using biometric data along with some supporting evidence, such as a photo ID or access to a cellphone number.

“This new IAL2-compliant offering adds proven one-to-one facial matching technology that allows Login.gov to confirm that a live selfie taken by a user matches the photo on a photo ID, such as a driver’s license, provided by the user,” the GSA said.

The Administration noted that the system doesn’t use “one-to-many” face matching technology to compare users to others in its database, and doesn’t use the images for any purpose other than verifying a user’s identity.

[…]

In a report issued by the GSA’s Office of the Inspector General in early 2023, the Administration was called out for claiming it had implemented IAL2-level identity verification as early as 2018 without ever actually meeting the standard’s requirements.

“GSA knowingly billed customer agencies over $10 million for services, including alleged IAL2 services that did not meet IAL2 standards,” the report claimed.

[…]

Fast forward to October of last year, and the GSA said it was embracing facial recognition tech on login.gov with plans to test it this year – a process it began in April. Since then, however, the GSA has published preprint findings of a study it conducted of five RiDV technologies, finding that they’re still largely unreliable.

The study anonymized the results of the five products, making it unclear which were included in the final pool or how any particular one performed. Generally, however, the report found that the best-performing product still failed 10 percent of the time, and the worst had a false negative rate of 50 percent, meaning its ability to properly match a selfie to a government ID was no better than chance.
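For a one-to-one matcher, the false negative rate is simply the share of genuine users the system wrongly rejects, which is why a 50 percent FNR on a yes/no decision is coin-flip territory. A tiny illustration (the counts are invented to match the study’s reported rates):

```python
def false_negative_rate(genuine_rejected: int, genuine_total: int) -> float:
    """FNR = genuine match attempts rejected / all genuine match attempts."""
    return genuine_rejected / genuine_total

# Best product in the GSA study: roughly 10% of genuine users rejected.
print(false_negative_rate(100, 1000))  # 0.1
# Worst product: half of genuine users rejected -- no better than chance.
print(false_negative_rate(500, 1000))  # 0.5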

Higher rejection rates for people with darker skin tones were also noted in one product, while another was more accurate for people of AAPI descent, but less accurate for everyone else – hardly the equitability the GSA said it wanted in an RiDV product last year.

[…]

It’s unclear what solution has been deployed for use on login.gov. The only firm we can confirm has been involved through the process is LexisNexis, which previously acknowledged to The Register that it has worked with the GSA on login.gov for some time.

That said, LexisNexis’ CEO for government risk solutions told us recently that he’s not convinced the GSA’s focus on adopting IAL2 RiDV solutions at the expense of other biometric verification methods is the best approach.

“Any time you rely on a single tool, especially in the modern era of generative AI and deep fakes … you are going to have this problem,” Haywood “Woody” Talcove told us during a phone interview last month. “I don’t think NIST has gone far enough with this workflow.”

Talcove told us that facial recognition is “pretty easy to game,” and said he wants a multi-layered approach – one that it looks like GSA has declined to pursue given how quickly it’s rolling out a solution.

“What this study shows is that there’s a level of risk being injected into government agencies completely relying on one tool,” Talcove said. “We’ve gotta go further.”

Along with asking the GSA for more details about its chosen RiDV solution, we also asked for some data about its performance. We didn’t get an answer to that question, either.

Source: Face matching now available on GSA’s login.gov • The Register

Italy is losing its mind because of copyright: it just made its awful Piracy Shield even worse

Walled Culture has been writing about Italy’s Piracy Shield system for a year now. It was clear from early on that its approach of blocking Internet addresses (IP addresses) to fight alleged copyright infringement – particularly the streaming of football matches – was flawed, and risked turning into another fiasco like France’s failed Hadopi law. The central issue with Piracy Shield is summed up in a recent post on the Disruptive Competition Blog:

The problem is that Italy’s Piracy Shield enables the blocking of content at the IP address and DNS level, which is particularly problematic in this time of shared IP addresses. It would be similar to arguing that if in a big shopping mall, in which dozens of shops share the same address, one shop owner is found to sell bootleg vinyl records with pirated music, the entire mall needs to be closed and all shops are forced to go out of business.
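The mall analogy translates directly into code. A hypothetical sketch (the IP is from the documentation range and the domains are invented) of how an IP-level block sweeps up every site on a shared address:

```python
# Hypothetical illustration: many unrelated sites behind one shared IP
# (a CDN or shared host). Blocking by IP takes them all down.
hosted = {
    "203.0.113.7": [  # documentation-range IP, purely illustrative
        "pirate-streams.example",
        "florist.example",
        "school-lunch-menu.example",
    ],
}

blocklist = {"203.0.113.7"}  # Piracy Shield-style block on the one bad actor

def reachable(domain: str, ip: str) -> bool:
    """A domain is unreachable if the address it resolves to is blocked."""
    return ip not in blocklist

for ip, domains in hosted.items():
    for d in domains:
        print(d, "ok" if reachable(d, ip) else "blocked")
# every site on the shared address is blocked, not just the pirate one
```

This is the overblocking failure mode: the block key (the IP address) is coarser than the thing actually being targeted (one site), so innocent bystanders sharing that key are knocked offline too.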

As that post points out, Italy’s IP blocking suffers from several underlying problems. One is overblocking, which has already happened, as Walled Culture noted back in March. Another issue is lack of transparency:

The Piracy Shield that has been implemented in Italy is fully automated, which prevents any transparency on the notified IP addresses and lacks checks and balances performed by third parties, who could verify whether the notified IP addresses are exclusively dedicated to piracy (and should be blocked) or not.

Piracy Shield isn’t working, and causes serious collateral damage, but instead of recognising this, its supporters have doubled down, and have just convinced the Italian parliament to pass amendments making it even worse, reported here by TorrentFreak:

VPN and DNS services anywhere on planet earth will be required to join Piracy Shield and start blocking pirate sites, most likely at their own expense, just like Italian ISPs are required to do already.

Moving forward, if pirate sites share an IP address with entirely innocent sites, and the innocent sites are outnumbered, ISPs, VPNs and DNS services will be legally required to block them all.

A new offence has been created that is aimed at service providers, including network access providers, who fail to promptly report illegal conduct by their users to the judicial authorities or police in Italy. The maximum punishment is not just a fine, but imprisonment for up to one year. Just why this is absurd is made clear by this LinkedIn comment by Diego Ciulli, Head of Government Affairs and Public Policy at Google Italy (translation by DeepL):

Under the label of ‘combating piracy’, the Senate yesterday approved a regulation obliging digital platforms to notify the judicial authorities of all copyright infringements – present, past and future – of which they become aware. Do you know how many there are in Google’s case? Currently, 9,756,931,770.

In short, the Senate is asking us to flood the judiciary with almost 10 billion URLs – and foresees jail time if we miss a single notification.

If the rule is not corrected, the risk is to do the opposite of the spirit of the law: flooding the judiciary, and taking resources away from the fight against piracy.

The new law will make running an Internet access service so risky that many will probably just give up, reducing consumer choice. Freedom of speech will be curtailed, online security weakened, and Italy’s digital infrastructure will be degraded. The end result of this law will be an overall impoverishment of Italian Internet users, Italian business, and the Italian economy. And all because of one industry’s obsession with policing copyright at all costs.

Source: Italy is losing its mind because of copyright: it just made its awful Piracy Shield even worse – Walled Culture

23andMe is on the brink. What happens to all that genetic DNA data?

[…] The one-and-done nature of Wiles’ experience is indicative of a core business problem with the once high-flying biotech company that is now teetering on the brink of collapse. Wiles and many of 23andMe’s 15 million other customers never returned. They paid once for a saliva kit, then moved on.

Shares of 23andMe are now worth pennies. The company’s valuation has plummeted 99% from its $6 billion peak shortly after the company went public in 2021.

As 23andMe struggles for survival, customers like Wiles have one pressing question: What is the company’s plan for all the data it has collected since it was founded in 2006?

[…]

Andy Kill, a spokesperson for 23andMe, would not comment on what the company might do with its trove of genetic data beyond general pronouncements about its commitment to privacy.

[…]

When signing up for the service, about 80% of 23andMe’s customers have opted in to having their genetic data analyzed for medical research.

[…]

The company has an agreement with pharmaceutical giant GlaxoSmithKline, or GSK, that allows the drugmaker to tap the tech company’s customer data to develop new treatments for disease.

Anya Prince, a law professor at the University of Iowa’s College of Law who focuses on genetic privacy, said those worried about their sensitive DNA information may not realize just how few federal protections exist.

For instance, the Health Insurance Portability and Accountability Act, also known as HIPAA, does not apply to 23andMe since it is a company outside of the health care realm.

[…]

According to the company, all of its genetic data is anonymized, meaning there is no way for GSK, or any other third party, to connect the sample to a real person. That, however, could make it nearly impossible for a customer to renege on their decision to allow researchers to access their DNA data.

“I couldn’t go to GSK and say, ‘Hey, my sample was given to you — I want that taken out’ — if it was anonymized, right? Because they’re not going to re-identify it just to pull it out of the database,” Prince said.

[…]

the patchwork of state laws governing DNA data makes the genetic data of millions potentially vulnerable to being sold off, or even mined by law enforcement.

“Having to rely on a private company’s terms of service or bottom line to protect that kind of information is troubling — particularly given the level of interest we’ve seen from government actors in accessing such information during criminal investigations,” Eidelman said.

She points to how investigators used a genealogy website to identify the man known as the Golden State Killer, and how police homed in on an Idaho murder suspect by turning to similar databases of genetic profiles.

“This has happened without people’s knowledge, much less their express consent,” Eidelman said.

[…]

Last year, the company was hit with a major data breach that it said affected 6.9 million customer accounts, including about 14,000 who had their passwords stolen.

[…]

Some analysts predict that 23andMe could go out of business by next year, barring a bankruptcy proceeding that could potentially restructure the company.

[…]

Source: What happens to all of 23andMe’s genetic DNA data? : NPR

For more fun reading about this clusterfuck of a company and why giving away DNA data is a spectacularly bad idea:

License Plate Readers Are Creating a US-Wide Database of Cars – and political affiliation, Planned Parenthood and more

At 8:22 am on December 4 last year, a car traveling down a small residential road in Alabama used its license-plate-reading cameras to take photos of vehicles it passed. One image, which does not contain a vehicle or a license plate, shows a bright red “Trump” campaign sign placed in front of someone’s garage. In the background is a banner referencing Israel, a holly wreath, and a festive inflatable snowman.

Another image taken on a different day by a different vehicle shows a “Steelworkers for Harris-Walz” sign stuck in the lawn in front of someone’s home. A construction worker, with his face unblurred, is pictured near another Harris sign. Other photos show Trump and Biden (including “Fuck Biden”) bumper stickers on the back of trucks and cars across America.

[…]

These images were generated by AI-powered cameras mounted on cars and trucks, initially designed to capture license plates, but which are now photographing political lawn signs outside private homes, individuals wearing T-shirts with text, and vehicles displaying pro-abortion bumper stickers—all while recording the precise locations of these observations.

[…]

The detailed photographs all surfaced in search results produced by the systems of DRN Data, a license-plate-recognition (LPR) company owned by Motorola Solutions. The LPR system can be used by private investigators, repossession agents, and insurance companies; a related Motorola business, called Vigilant, gives cops access to the same LPR data.

[…]

those with access to the LPR system can search for common phrases or names, such as those of politicians, and be served with photographs where the search term is present, even if it is not displayed on license plates.

[…]

“I searched for the word ‘believe,’ and that is all lawn signs. There’s things just painted on planters on the side of the road, and then someone wearing a sweatshirt that says ‘Believe,’” Weist says. “I did a search for the word ‘lost,’ and it found the flyers that people put up for lost dogs and cats.”
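The searches Weist describes suggest the system runs OCR over every captured image and indexes the recognized text, so a query matches anything legible in frame, not just plates. A minimal sketch of that idea, with an entirely made-up schema (none of these field names are DRN's actual API):

```python
# Hypothetical sketch: full-text search over OCR'd text attached to
# vehicle-camera images, illustrating how a query like "lost" can
# surface lawn signs and flyers rather than license plates.
# The records below are invented examples, not real DRN data.

sightings = [
    {"id": 1, "plate": "ABC123", "ocr_text": "steelworkers for harris-walz"},
    {"id": 2, "plate": "XYZ789", "ocr_text": "lost cat please call"},
    {"id": 3, "plate": "DEF456", "ocr_text": "believe"},
]

def search_sightings(term, records):
    """Return every sighting whose OCR'd text contains the term,
    regardless of whether the term appears on a license plate."""
    term = term.lower()
    return [r for r in records if term in r["ocr_text"]]

matches = search_sightings("lost", sightings)
print([r["id"] for r in matches])  # → [2], the lost-cat flyer sighting
```

The point of the sketch is that once OCR text is indexed alongside location and timestamp, any visible writing (a sign, a sweatshirt, a bumper sticker) becomes queryable in exactly the way Weist demonstrated.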

Beyond highlighting the far-reaching nature of LPR technology, which has collected billions of images of license plates, the research also shows how people’s personal political views and their homes can be recorded into vast databases that can be queried.

[…]

Over more than a decade, DRN has amassed more than 15 billion “vehicle sightings” across the United States, and it claims in its marketing materials that it amasses more than 250 million sightings per month.

[…]

The system is partly fueled by DRN “affiliates” who install cameras in their vehicles, such as repossession trucks, and capture license plates as they drive around. Each vehicle can have up to four cameras attached to it, capturing images from all angles. These affiliates earn monthly bonuses and can also receive free cameras and search credits.

In 2022, Weist became a certified private investigator in New York State. In doing so, she unlocked the ability to access the vast array of surveillance software accessible to PIs. Weist could access DRN’s analytics system, DRNsights, as part of a package through investigations company IRBsearch. (After Weist published an op-ed detailing her work, IRBsearch conducted an audit of her account and discontinued it.)

[…]

While not linked to license plate data, one law enforcement official in Ohio recently said people should “write down” the addresses of people who display yard signs supporting Vice President Kamala Harris, the 2024 Democratic presidential nominee, exemplifying how a searchable database of citizens’ political affiliations could be abused.

[…]

In 2022, WIRED revealed that hundreds of US Immigration and Customs Enforcement employees and contractors were investigated for abusing similar databases, including LPR systems. The alleged misconduct in both reports ranged from stalking and harassment to sharing information with criminals.

[…]

 

Source: License Plate Readers Are Creating a US-Wide Database of More Than Just Cars | WIRED

Insecure Robot Vacuums From Chinese Company Deebot Collect Photos and Audio to Train Their AI

Ecovacs robot vacuums, which have been found to suffer from critical cybersecurity flaws, are collecting photos, videos and voice recordings — taken inside customers’ houses — to train the company’s AI models.

The Chinese home robotics company, which sells a range of popular Deebot models in Australia, said its users are “willingly participating” in a product improvement program.

When users opt into this program through the Ecovacs smartphone app, they are not told what data will be collected, only that it will “help us strengthen the improvement of product functions and attached quality”. Users are instructed to click “above” to read the specifics; however, there is no link available on that page.

Ecovacs’s privacy policy — available elsewhere in the app — allows for blanket collection of user data for research purposes, including:

– The 2D or 3D map of the user’s house generated by the device
– Voice recordings from the device’s microphone
– Photos or videos recorded by the device’s camera

“It also states that voice recordings, videos and photos that are deleted via the app may continue to be held and used by Ecovacs…”

Source: Insecure Robot Vacuums From Chinese Company Deebot Collect Photos and Audio to Train Their AI

Dutch oppose Hungary’s approach to EU child sexual abuse regulation – or total surveillance of every smart device

The Netherlands’ government and opposition are both against the latest version of the controversial EU regulation aimed at detecting online child sexual abuse material (CSAM), according to an official position and an open letter published on Tuesday (1 October).

The regulation, aimed at detecting online CSAM, has been criticised for potentially allowing the scanning of private messages on platforms such as WhatsApp or Gmail.

However, the latest compromise text, dated 9 September, limits detection to known material, among other changes. ‘Known’ material refers to content that has already been circulating and detected, in contrast to ‘new’ material that has not yet been identified.

The Hungarian presidency of the Council of the EU shared a partial general approach, dated 24 September and seen by Euractiv, which mirrors the 9 September text but reduces the reevaluation period from five years to three for grooming and new CSAM.

Limiting detection to known material would curtail authorities’ ability to scan massive amounts of communications, suggesting the change is an attempt to address privacy concerns.

The Netherlands initially supported the proposal to limit detection to ‘known’ material but withdrew its support in early September, Euractiv reported.

On Tuesday (1 October), Amsterdam officially took a stance against the general approach, despite speculation last week suggesting the country might shift its position in favour of the regulation.

This is also despite the Dutch mostly maintaining that their primary concern lies with combating known CSAM – a focus that aligns with the scope of the latest proposal.

According to various statistics, the Netherlands hosts a significant amount of CSAM.

The Dutch had been considering supporting the proposal, or at least a “silent abstention” that might have weakened the blocking minority, signalling a shift since Friday (27 September), a source close to the matter told Euractiv.

While a change in the Netherlands’ stance could have affected the blocking minority in the EU Council, their current position now strengthens it.

If the draft law were to pass in the EU Council, the next stage would be interinstitutional negotiations, called trilogues, between the European Parliament, the Council of the EU, and the Commission to finalise the legislation.

Both the Dutch government and the opposition are against supporting the new partial general approach.

Opposition party GroenLinks-PvdA (Greens/EFA) published an open letter, also on Tuesday, backed by a coalition of national and EU-based private and non-profit organisations, urging the government to vote against the proposal.

According to the letter, the regulation will be discussed at the Justice and Home Affairs Council on 11 October, with positions coordinated among member states on 2 October.

Currently, an interim regulation allows companies to detect and report online CSAM voluntarily. Originally set to expire in 2024, this measure has been extended to 2026 to avoid a legislative gap, as the draft for a permanent law has yet to be agreed.

The Dutch Secret Service opposed the draft regulation because “introducing a scan application on every mobile phone” with infrastructure to manage the scans would be a complex and extensive system that would introduce risks to digital resilience, according to a decision note.

Source: Dutch oppose Hungary’s approach to EU child sexual abuse regulation – Euractiv

To find out more about how invasive the proposed scanning feature is, look through the articles here: https://www.linkielist.com/?s=csam

Mazda’s $10 Subscription For Remote Start Sparks Backlash After Killing Open Source Option

Mazda recently surprised customers by requiring them to sign up for a subscription in order to keep certain services. Now, notable right-to-repair advocate Louis Rossmann is calling out the brand. He points to several moves by Mazda as reasons for his anger toward them. However, it turns out that customers might still have a workaround.

Previously, the Japanese carmaker offered connected services, which included features such as remote start, without requiring a subscription. At the time, the company told customers that these services would eventually transition to a paid model.

More: Native Google Maps Won’t Work On New GM Cars Without $300 Subscription

It’s important to clarify that there are two very different types of remote start we’re talking about here. The first type is the one many people are familiar with where you use the key fob to start the vehicle. The second method involves using another device like a smartphone to start the car. In the latter, connected services do the heavy lifting.

Transition to paid services

What is wild is that Mazda used to offer the first option on the fob. Now, it only offers the second kind, where one starts the car via phone through its connected services for a $10 monthly subscription, which comes to $120 a year. Rossmann points out that one individual, Brandon Rorthweiler, developed a workaround in 2023 to enable remote start without Mazda’s subscription fees.

However, according to Ars Technica, Mazda filed a DMCA takedown notice to kill that open-source project. The company claimed it contained code that violated “[Mazda’s] copyright ownership” and used “certain Mazda information, including proprietary API information.” Additionally, Mazda argued that the project included code providing functionality identical to that found in its official apps available on the Apple App Store and Google Play Store.

That doesn’t mean an aftermarket remote starter kit won’t work though. In fact, with Mazda’s subscription model now in place, it’s not hard to imagine customers flocking to aftermarket solutions to avoid the extra fees. However, by not opting to pay for Mazda Connected Services, owners will also miss out on things like vehicle health reports, remote keyless entry, and vehicle status reports.

A growing trend

Bear in mind that this is just one case of an automaker trying to milk their customers with subscription-based features, which could net them millions in extra income. BMW, for example, installs adaptive suspension hardware in some vehicles but charges $27.50 per month (or $505 for a one-time purchase) to unlock the software that makes the suspension actually work.

And then there’s Ferrari’s plan to offer a battery subscription for extended warranty coverage on its hybrid models for a measly $7,500 per year!

[…]

Sure, you might have paid a considerable amount of money to buy your car, and it might legally be yours, but that does not mean you really own all of the features it comes with, unless you’re prepared to pay extra.

Source: Mazda’s $10 Subscription For Remote Start Sparks Backlash After Killing Open Source Option | Carscoops

LG Wants to Show You Ads Even When You’re Not Watching TV

FlatpanelsHD reveals (via Android Authority) that the ads start playing before the screensaver appears and are usually sponsored messages from LG or its partners. The review highlighted one specific ad for the LG Channels app: LG’s free, ad-supported live TV service. FlatpanelsHD adds that, according to LG’s ad division, users will soon start seeing ads for other products and services.

The review mentions that “some of the ads” can be disabled, and there’s also an option under ‘Additional Settings’ to disable screensaver ads. But it’s almost sinful to push ads on a $2,400 device.

What makes this whole thing more bizarre is that, according to the review, LG pushes the same ads with the same frequency on its cheaper offerings. Oddly, it does nothing to differentiate the experience of purchasing a high-end model from an entry-level one. The brand’s OLED line is already pricey, but the G4 is allegedly “one of the most expensive TVs on the market,” according to FlatpanelsHD. I can only imagine how this will play out for the South Korean company. As FlatpanelsHD said, “LG must reconsider this strategy if they want to sell high-end TVs.”

Source: LG Wants to Show You Ads Even When You’re Not Watching TV

Unbelievable, this.