The Linkielist

Linking ideas with the world

Venezuela’s Internet Censorship Sparks Surge in VPN Demand

What’s Important to Know:

  • Venezuela’s Supreme Court fined TikTok US$10 million for failing to prevent viral video challenges that resulted in the deaths of three Venezuelan children.
  • TikTok faced temporary blockades by Internet Service Providers (ISPs) in Venezuela for not paying the fine.
  • ISPs used IP, HTTP, and DNS blocks to restrict access to TikTok and other platforms in early January 2025.
  • While this latest round of blockades was taking place, protests against Nicolás Maduro’s attempt to retain the presidency of Venezuela were happening across the country, with riot police deployed in all major cities to quell protesters.
  • A significant surge in demand for VPN services has been observed in Venezuela since the beginning of 2025. Access to some VPN providers’ websites has also been restricted in the country.

In November 2024, Nicolás Maduro announced that two children had died after participating in challenges on TikTok. After a third death was announced by Education Minister Héctor Rodriguez, Venezuela’s Supreme Court issued a $10 million fine against the social media platform for failing to implement measures to prevent such incidents.

The court also ordered TikTok to open an office in Venezuela to oversee content compliance with local laws, giving the platform eight days to comply and pay the fine. TikTok failed to meet the court’s deadline to pay the fine or open an office in the country. As a result, ISPs in Venezuela, including CANTV — the state’s internet provider — temporarily blocked access to TikTok.

The blockades happened on January 7 and later on January 8, lasting several hours each. According to Netblocks.org, various methods were used to restrict access to TikTok, including IP, HTTP, and DNS blocks.
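For readers curious how those blocking layers differ in practice: a DNS block makes the site’s name fail to resolve at all, while an IP block leaves the name resolving but the address unreachable. A minimal probe along these lines can tell the two apart (illustrative only — HTTP-level interference, the third method, isn’t visible at this layer):

```python
import socket

def probe(domain, port=443, timeout=3):
    """Roughly classify how a site might be blocked.

    Returns "dns-block" if the name fails to resolve,
    "ip-block" if it resolves but a TCP connection fails,
    or "reachable" otherwise. HTTP-level blocks (injected
    resets or block pages after the request) are not
    detectable at this layer.
    """
    try:
        addr = socket.gethostbyname(domain)
    except socket.gaierror:
        return "dns-block"
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return "reachable"
    except OSError:
        return "ip-block"
```

Tools like Netblocks and OONI run far more careful versions of this measurement (with control resolvers and known-good vantage points), but the basic distinction is the same.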

This screenshot shows the Netblocks.org report, indicating zero reachability to TikTok across different Venezuelan ISPs.

On January 9, under orders of CONATEL (Venezuela’s telecommunications regulator), CANTV and other private ISPs in the country implemented further blockades to restrict access to TikTok. For instance, they blocked 21 VPN providers along with 33 public DNS services as reported by VeSinFiltro.org.

[…]

vpnMentor’s Research Team first observed a significant surge in demand for VPN services in the country back in 2024, when X was first blocked. Since then, VPN usage has continued to rise in Venezuela, surging again at the beginning of 2025. VPN demand grew over 200% from January 7 to January 8 alone, for a total of 328% growth from January 1 to January 8. This upward trend shows signs of further growth, according to partial data from January 9.

The increased demand for VPN services indicates a growing interest in circumventing censorship and accessing restricted content online. This trend suggests that Venezuelan citizens are actively seeking ways to bypass government-imposed restrictions on social media platforms and maintain access to a free flow of information.

[…]

Other Recent VPN Demand Surges

Online platforms are no strangers to geoblocks in different parts of the world. In fact, there have been cases where platforms themselves impose location-based access restrictions on users. For instance, Aylo/Pornhub previously geo-blocked 17 US states in response to age-verification laws that the adult site deemed unjust.

vpnMentor’s Research Team recently published a report about a staggering 1,150% VPN demand surge in Florida following the IP-block of Pornhub in the state.

Source: Venezuela’s Internet Censorship Sparks Surge in VPN Demand

VPN Demand Surge in Florida after Adult Sites Age Restriction Kicks In

What’s important to know:

  • On March 25, 2024, Florida Gov. Ron DeSantis signed a law requiring age verification for accessing pornographic sites. This law, known as House Bill 3 (HB3), passed with bipartisan support and has caused quite a stir in the online community.
  • HB3 was set to come into effect on January 1, 2025. It allows hefty fines of up to $50,000 for websites that fail to comply with the regulations.
  • In response to this new legislation, Aylo, the parent company of Pornhub, confirmed on December 18, 2024, that it would deny access to all users geo-located in the state as a form of protest against the new age-verification requirements imposed by state law.
  • Pornhub, which registered 3 billion visits from the United States in January 2024, had previously imposed access restrictions in Kentucky, Indiana, Idaho, Kansas, Nebraska, Texas, North Carolina, Montana, Mississippi, Virginia, Arkansas, and Utah. This makes Florida the 13th state without access to its website.

The interesting development following Aylo’s geo-block on Florida IP addresses is the dramatic increase in the demand for Virtual Private Network (VPN) services in the state. A VPN allows users to mask their IP addresses and encrypt their internet traffic, providing an added layer of privacy and security while browsing online.

The vpnMentor Research Team observed a significant surge in VPN usage across the state of Florida: demand began climbing in the final minutes of 2024, rose steadily through the first hours of January 1, and peaked at a staggering 1,150% just four hours after the HB3 law came into effect.
Additionally, there was a noteworthy 51% spike in demand for VPN services in the state on December 19, 2024, the day after Aylo announced it would geo-block Florida IP addresses from accessing its website.

Florida’s new law on pornographic websites and the consequent rise of VPN usage emphasize the intricate interplay between technology, privacy, and regulatory frameworks. With laws pertaining to online activities constantly changing, it is imperative for users and website operators alike to remain knowledgeable about regulations and ensure compliance.

Past VPN Demand Surges

Aylo/Pornhub has previously geo-blocked 12 states, all of which enforced age-verification laws that the adult site deemed unjust.

In May 2023, Pornhub’s banning of Utah-based users caused a 967% spike in VPN demand in that state, and last year the passage of an age-restriction law for adult sites in Texas drove a 234.8% surge in demand there.

Source: VPN Demand Surge in Florida after Adult Sites Age Restriction Kicks In

Google brings back digital fingerprinting to track users for advertising

Google is tracking your online behavior in the name of advertising, reintroducing a data collection process that ingests all of your online signals (from IP address to complex browser information) and pinpoints unique users or devices, also known as “digital fingerprinting.”
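The core trick of fingerprinting is that nothing needs to be stored on the user’s device: a stable identifier is simply recomputed from signals the browser and network give away passively. A toy sketch of the general technique (this is illustrative, not Google’s actual implementation — the signal names below are assumptions):

```python
import hashlib

def fingerprint(signals: dict) -> str:
    """Derive a stable identifier from passive browser/device signals.

    No cookie is set: as long as the signals stay the same, the
    identical ID is recomputed on every visit, so clearing cookies
    does nothing to break the link.
    """
    canonical = "|".join(f"{k}={signals[k]}" for k in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical visitor signals, of the kind a page can observe:
visitor = {
    "ip": "203.0.113.7",              # network-level signal
    "user_agent": "Mozilla/5.0 ...",  # browser build and OS
    "screen": "2560x1440",            # hardware hint
    "timezone": "America/Caracas",
    "fonts": "Arial,Helvetica,...",   # enumerable via JS/canvas
}
```

Each signal alone is common; combined, they are often unique enough to single out a device — which is exactly why fingerprints are so hard for users to erase or block.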

The company’s updated platform program policies include relaxed restrictions on advertisers and personalized ad targeting across a range of devices, an outcome of a larger “advertising ecosystem shift” and the advancement of privacy-enhancing technologies (PETs) like on-device processing and trusted execution environments, in the words of the company.

A departure from its longstanding pledge to user choice and privacy, Google argues these technologies offer enough protection for users while also creating “new ways for brands to manage and activate their data safely and securely.” The new feature will be available to advertisers beginning Feb. 16, 2025.

[…]

Contrary to other data collection tools like cookies, digital fingerprinting is difficult to spot, and thus even harder for even privacy-conscious users to erase or block. On Dec. 19, the UK’s Information Commissioner’s Office (ICO) — a data protection and privacy regulator — labeled Google “irresponsible” for the policy change, saying the shift to fingerprinting is an unfair means of tracking users, reducing choice and control over their personal information. The watchdog also warned that the move could encourage riskier advertiser behavior.

“Google itself has previously said that fingerprinting does not meet users’ expectations for privacy, as users cannot easily consent to it as they would cookies. This in turn means they cannot control how their information is collected. To quote Google’s own position on fingerprinting from 2019: ‘We think this subverts user choice and is wrong,'” wrote ICO executive director of regulatory risk Stephen Almond.

The ICO warned that it will intervene if Google cannot demonstrate existing legal requirements for such tech, including options to secure freely-given consent, ensure fair processing, and uphold the right to erasure: “Businesses should not consider fingerprinting a simple solution to the loss of third-party cookies and other cross-site tracking signals.”

Source: Google brings back digital fingerprinting to track users for advertising | Mashable

Telegram hands over data on 2,253 users last year (up from 108 in 2023) to US law enforcement alone after arrest of boss

Telegram reveals that the communications platform has fulfilled 900 U.S. government requests, sharing the phone number or IP address information of 2,253 users with law enforcement.

This number is a steep increase from previous years, with most requests processed after the platform’s policy shift on sharing user data, announced in September 2024.

While Telegram has long been a platform used to communicate with friends and family, talk with like-minded peers, and as a way to bypass government censorship, it is also heavily used for cybercrime.

Threat actors commonly utilize the platform to sell illegal services, conduct attacks, sell stolen data, or as a command and control server for their malware.

As first reported by 404 Media, the new information on fulfilled law enforcement requests comes from the Telegram Transparency Report for the period between 1/1/24 and 12/13/24.

Previously, Telegram would only share users’ IP addresses and phone numbers in cases of terrorism and had only fulfilled 14 requests affecting 108 users until September 30, 2024.

Current numbers (left) and previous period figures (right)
Source: BleepingComputer

Following the change in its privacy policy, Telegram will now share user data with law enforcement in other cases of crime, including cybercrime, the selling of illegal goods, and online fraud.

[…]

This change came in response to pressure from the authorities, culminating in the arrest of Telegram’s founder and CEO, Pavel Durov, in late August in France.

Durov subsequently faced a long list of charges, including complicity in cybercrime, organized fraud, and distribution of illegal material, as well as refusal to facilitate lawful interceptions aimed at aiding crime investigations.

[…]

To access Telegram transparency reports for your country, use the platform’s dedicated transparency bot.

Source: Telegram hands over data on thousands of users to US law enforcement

That’s one way to get what you want – make up spurious charges, arrest someone, and hold them for as long as it takes to get what you want without having to actually prove you can legally get at it. If it weren’t the government doing it, this would be called kidnapping and extortion.

Google goes to court for collecting data on users who opted out… again…

A federal judge this week rejected Google’s motion to throw out a class-action lawsuit alleging that it invaded the privacy of users who opted out of functionality that records a user’s web and app activities. A jury trial is scheduled for August 2025 in US District Court in San Francisco.

The lawsuit concerns Google’s Web & App Activity (WAA) settings, with the lead plaintiff representing two subclasses of people with Android and non-Android phones who opted out of tracking. “The WAA button is a Google account setting that purports to give users privacy control of Google’s data logging of the user’s web app and activity, such as a user’s searches and activity from other Google services, information associated with the user’s activity, and information about the user’s location and device,” wrote US District Judge Richard Seeborg, the chief judge in the Northern District of California.

Google says that Web & App Activity “saves your activity on Google sites and apps, including associated info like location, to give you faster searches, better recommendations, and more personalized experiences in Maps, Search, and other Google services.” Google also has a supplemental Web & App Activity setting that the judge’s ruling refers to as “(s)WAA.”

“The (s)WAA button, which can only be switched on if WAA is also switched on, governs information regarding a user’s ‘[Google] Chrome history and activity from sites, apps, and devices that use Google services.’ Disabling WAA also disables the (s)WAA button,” Seeborg wrote.

Google sends data to developers

But data is still sent to third-party app developers through the Google Analytics for Firebase (GA4F), “a free analytical tool that takes user data from the Firebase kit and provides app developers with insight on app usage and user engagement,” the ruling said. GA4F “is integrated in 60 percent of the top apps” and “works by automatically sending to Google a user’s ad interactions and certain identifiers regardless of a user’s (s)WAA settings, and Google will, in turn, provide analysis of that data back to the app developer.”

Plaintiffs have brought claims of privacy invasion under California law. Plaintiffs “present evidence that their data has economic value,” and “a reasonable juror could find that Plaintiffs suffered damage or loss because Google profited from the misappropriation of their data,” Seeborg wrote.

[…]

In a proposed settlement of a different lawsuit, Google last year agreed to delete records reflecting users’ private browsing activities in Chrome’s Incognito mode.

[…]

Google contends that its system is harmless to users. “Google argues that its sole purpose for collecting (s)WAA-off data is to provide these analytic services to app developers. This data, per Google, consists only of non-personally identifiable information and is unrelated (or, at least, not directly related) to any profit-making objectives,” Seeborg wrote.

On the other side, plaintiffs say that Google’s tracking contradicts its “representations to users because it gathers exactly the data Google denies saving and collecting about (s)WAA-off users,” Seeborg wrote. “Moreover, Plaintiffs insist that Google’s practices allow it to personalize ads by linking user ad interactions to any later related behavior—information advertisers are likely to find valuable—leading to Google’s lucrative advertising enterprise built, in part, on (s)WAA-off data unlawfully retrieved.”

[…]

Google, as the judge writes, purports to treat user data as pseudonymous by creating a randomly generated identifier that “permits Google to recognize the particular device and its later ad-related behavior… Google insists that it has created technical barriers to ensure, for (s)WAA-off users, that pseudonymous data is delinked to a user’s identity by first performing a ‘consent check’ to determine a user’s (s)WAA settings.”

Whether this counts as personal information under the law is a question for a jury, the judge wrote. Seeborg pointed to California law that defines personal information to include data that “is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” Given the legal definition, “a reasonable juror could view the (s)WAA-off data Google collected via GA4F, including a user’s unique device identifiers, as comprising a user’s personal information,” he wrote.

[…]

Source: Google loses in court, faces trial for collecting data on users who opted out – Ars Technica

Siri “unintentionally” recorded private convos on phone, watch, then sold them to advertisers; yes those ads are very targeted Apple agrees to pay $95M, laughs to the bank

Apple has agreed to pay $95 million to settle a lawsuit alleging that its voice assistant Siri routinely recorded private conversations that were then shared with third parties and used for targeted ads.

In the proposed class-action settlement—which comes after five years of litigation—Apple admitted to no wrongdoing. Instead, the settlement refers to “unintentional” Siri activations that occurred after the “Hey, Siri” feature was introduced in 2014, where recordings were apparently prompted without users ever saying the trigger words, “Hey, Siri.”

Sometimes Siri would be inadvertently activated, a whistleblower told The Guardian, when an Apple Watch was raised and speech was detected. The only clue that users seemingly had of Siri’s alleged spying was eerily accurate targeted ads that appeared after they had just been talking about specific items like Air Jordans or brands like Olive Garden, Reuters noted (claims which remain disputed).

[…]

It’s currently unknown how many customers were affected, but if the settlement is approved, the tech giant has offered up to $20 per Siri-enabled device for any customers who made purchases between September 17, 2014, and December 31, 2024. That includes iPhones, iPads, Apple Watches, MacBooks, HomePods, iPod touches, and Apple TVs, the settlement agreement noted. Each customer can submit claims for up to five devices.

A hearing at which the settlement could be approved is currently scheduled for February 14. If the settlement is certified, Apple will send notices to all affected customers. Through the settlement, customers can not only get monetary relief but also ensure that their private phone calls are permanently deleted.

While the settlement appears to be a victory for Apple users after months of mediation, it potentially lets Apple off the hook pretty cheaply. If the court had certified the class action and Apple users had won, Apple could’ve been fined more than $1.5 billion under the Wiretap Act alone, court filings showed.

But lawyers representing Apple users decided to settle, partly because data privacy law is still a “developing area of law imposing inherent risks that a new decision could shift the legal landscape as to the certifiability of a class, liability, and damages,” the motion to approve the settlement agreement said. It was also possible that the class size could be significantly narrowed through ongoing litigation, if the court determined that Apple users had to prove their calls had been recorded through an incidental Siri activation—potentially reducing recoverable damages for everyone.

“The percentage of those who experienced an unintended Siri activation is not known,” the motion said. “Although it is difficult to estimate what a jury would award, and what claims or class(es) would proceed to trial, the Settlement reflects approximately 10–15 percent of Plaintiffs expected recoverable damages.”

Siri’s unintentional recordings were initially exposed by The Guardian in 2019, plaintiffs’ complaint said. That’s when a whistleblower alleged that “there have been countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on. These recordings are accompanied by user data showing location, contact details, and app data.”

[…]

Meanwhile, Google faces a similar lawsuit in the same district from plaintiffs represented by the same firms over its voice assistant, Reuters noted. A win in that suit could affect anyone who purchased “Google’s own smart home speakers, Google Home, Home Mini, and Home Max; smart displays, Google Nest Hub, and Nest Hub Max; and its Pixel smartphones” from approximately May 18, 2016 to today, a December court filing noted. That litigation likely won’t be settled until this fall.

Source: Siri “unintentionally” recorded private convos; Apple agrees to pay $95M – Ars Technica

PayPal Honey extension to find deals instead hides discounts and reroutes commissions from promoters

PayPal-owned browser extension Honey manipulates affiliate marketing systems and withholds discount information from users, according to an investigation by YouTube channel MegaLag.

The extension — which rose in popularity after promising consumers it would find them the best online deals — replaces existing affiliate cookies with its own during checkout, diverting commission payments from content creators who promoted the products to PayPal, MegaLag reported in a 23-minute video [YouTube link].

The investigation revealed that Honey, which PayPal acquired in 2019 for $4 billion, allows merchants in its cashback program to control which coupons appear to users, hiding better publicly available discounts.

Source: PayPal’s Honey Accused of Misleading Users, Hiding Discounts

Hundreds of websites to shut down under UK’s ‘chilling’ internet laws

Hundreds of websites will be shut down on the day that Britain’s Online Safety Act comes into effect, in what are believed to be the first casualties of the new internet laws.

Microcosm, a web forum hosting service that runs 300 sites including cycling forums and local community hubs, said that the sites would go offline on March 16, the day that Ofcom starts enforcing the Act.

Its owner said they were unable to comply with the lengthy requirements of the Act, which created a “disproportionately high personal liability”.

The new laws, which were designed to crack down on illegal content and protect children, threaten fines of up to £18m or 10pc of revenue for sites that fail to comply with the laws.

On Monday, Ofcom set out more than 40 measures that it expects online services to follow by March, such as carrying out risk assessments about their sites and naming senior people accountable for ensuring safety.

Microcosm, which has hosted websites including cycling forum LFGSS since 2007, is run as a non-profit funded by donations and largely relies on users to follow community guidelines. Its sites attract a combined 250,000 users.

Dee Kitchen, who operates the service and moderates its 300 sites, said: “What this is, is a chilling effect [on small sites].

“For the really small sites and the charitable sites and the local sports club there’s no carve-out for anything.

“It feels like a huge risk, and it feels like it can be so easily weaponised by angry people who are the subject of moderation.

“It’s too vague and too broad and I don’t want to take that personal risk.”

Announcing the shutdown on the LFGSS forum, they said: “It’s devastating to just … turn it off … but this is what the Act forces a sole individual running so many social websites for a public good to do.”

[…]

Source: Hundreds of websites to shut down under UK’s ‘chilling’ internet laws

Android will let you find unknown Bluetooth trackers instead of just warning you about them

The advent of Bluetooth trackers has made it a lot easier to find your bag or keys when they’re lost, but it has also put inconspicuous tracking tools in the hands of people who might misuse them. Apple and Google have both implemented tracker alerts to let you know if there’s an unknown Bluetooth tracker nearby, and now as part of a new update, Google is letting Android users actually locate those trackers, too.

The feature is one of two new tools Google is adding to Find My Device-compatible trackers. The first, “Temporarily Pause Location” is what you’re supposed to enable when you first receive an unknown tracker notification. It blocks your phone from updating its location with trackers for 24 hours. The second, “Find Nearby,” helps you pinpoint where the tracker is if you can’t see it or easily hear it.

By clicking on an unknown tracker notification you’ll be able to see a map of where the tracker was last spotted moving with you. From there, you can play a sound to see if you can locate it (Google says the owner won’t be notified). If you can’t find it, Find Nearby will connect your phone to the tracker over Bluetooth and display a shape that fills in the closer you get to it.
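Google hasn’t published how Find Nearby maps signal readings to that filling shape, but proximity UIs of this kind typically map Bluetooth signal strength (RSSI) onto a 0-to-1 “fullness” value. A simple linear sketch, assuming illustrative near/far thresholds:

```python
def proximity_fill(rssi_dbm, far_dbm=-100, near_dbm=-40):
    """Map a Bluetooth RSSI reading (in dBm) to a 0.0-1.0 'fill'.

    Stronger (less negative) signal means the tracker is closer,
    so the on-screen shape fills up. Readings outside the assumed
    far/near range are clamped to keep the fill within [0, 1].
    """
    frac = (rssi_dbm - far_dbm) / (near_dbm - far_dbm)
    return max(0.0, min(1.0, frac))
```

In practice RSSI is noisy and attenuated by walls and bodies, so real implementations smooth readings over time rather than display raw values — but the basic closer-signal-equals-fuller-shape idea is the same.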

The Find Nearby button and interface from Google's Find My Device network.
Google / Engadget

The tool is identical to what Google offers for locating trackers and devices you actually own, but importantly, you don’t need to use Find My Device or have your own tracker to benefit. Like Google’s original notifications feature, any device running Android 6.0 and up can deal with unknown Bluetooth trackers safely.

Expanding Find Nearby seems like the final step Google needed to take to tamp down Bluetooth tracker misuse, something Apple already does with its Precision Finding tool for AirTags. The companies released a shared standard for spotting unknown Bluetooth trackers regardless of whether you use Android or iOS in May 2024, following the launch of Google’s Find My Device network in April. Both Google and Apple offered their own methods of dealing with unknown trackers before then to prevent trackers from being used for everything from robbery to stalking.

Source: Android will let you find unknown Bluetooth trackers instead of just warning you about them

300 Artists Back Internet Archive in $621 Million Copyright Attack from Record Labels – over music from before the 1950s

[…] 300-plus musicians who have signed an open letter supporting the Internet Archive as it faces a $621 million copyright infringement lawsuit over its efforts to preserve 78 rpm records.

The letter, spearheaded by the digital advocacy group Fight for the Future, states that the signatories “wholeheartedly oppose” the lawsuit, which they suggest benefits “shareholder profits” more than actual artists. It continues: “We don’t believe that the Internet Archive should be destroyed in our name. The biggest players of our industry clearly need better ideas for supporting us, the artists, and in this letter we are offering them.”

[…]

(The full letter, and a list of signatories, is here.)

The lawsuit was brought last year by several major music rights holders, led by Universal Music Group and Sony Music. They claimed the Internet Archive’s Great 78 Project — an unprecedented effort to digitize hundreds of thousands of obsolete shellac discs produced between the 1890s and early 1950s — constituted the “wholesale theft of generations of music,” with “preservation and research” used as a “smokescreen.” (The Archive has denied the claims.)

While more than 400,000 recordings have been digitized and made available to listen to on the Great 78 Project, the lawsuit focuses on about 4,000, most by recognizable legacy acts like Billie Holiday, Frank Sinatra, Elvis Presley, and Ella Fitzgerald. With the maximum penalty for statutory damages at $150,000 per infringing incident, the lawsuit has a potential price tag of over $621 million. A broad enough judgement could end the Internet Archive.

Supporters of the suit — including the estates of many of the legacy artists whose recordings are involved — claim the Archive is doing nothing more than reproducing and distributing copyrighted works, making it a clear-cut case of infringement. The Archive, meanwhile, has always billed itself as a research library (albeit a digital one), and its supporters see the suit (as well as a similar one brought by book publishers) as an attack on preservation efforts, as well as public access to the cultural record.

[…]

“Musicians are struggling, but libraries like the Internet Archive are not our problem! Corporations like Spotify, Apple, Live Nation and Ticketmaster are our problem. If labels really wanted to help musicians, they would be working to raise streaming rates. This lawsuit is just another profit-grab.”

Tommy Cappel, who co-founded the group Beats Antique, says the Archive is “hugely valued in the music community” for its preservation of everything from rare recordings to live sets. “This is important work that deserves to continue for generations to come, and we don’t want to see everything they’ve already done for musicians and our legacy erased,” he added. “Major labels could see all musicians, past and present, as partners — instead of being the bad guy in this dynamic. They should drop their suit. Archives keep us alive.”

Rather than suing the Archive, Fight for the Future’s letter calls on labels, streaming services, ticketing outlets, and venues to align on different goals. At the top of the list is boosting preservation efforts by partnering with “valuable cultural stewards like the Internet Archive.” They also call for greater investment in working musicians through more transparency in ticketing practices, an end to venue merch cuts, and fair streaming compensation.

[…]

Source: Kathleen Hanna, Tegan and Sara, More Back Internet Archive in $621 Million Copyright Fight

How is it possible that something released in the 1950s still generates income for people who had absolutely nothing to do with its creation and put no effort whatsoever into producing the content?

Why Italy’s Piracy Shield destroys huge internet companies and small businesses with no recourse (unless you are rich) and can knock out the entire internet in Italy to… protect against football streaming?!

Walled Culture has been following the sorry saga of Italy’s automated blocking system Piracy Shield for a year now. Blocklists are drawn up by copyright companies without any review or possibility of objection, and those blocks must be enforced within 30 minutes. Needless to say, such a ham-fisted and biased approach to copyright infringement is already producing some horrendous blunders.

For example, back in March Walled Culture reported that one of Cloudflare’s Internet addresses had been blocked by Piracy Shield. There were over 40 million domains associated with the blocked address – which shows how this crude approach can cause significant collateral damage to millions of sites not involved in any alleged copyright infringement.
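The reason one blocked address takes down millions of sites is that CDNs and shared hosts put many unrelated domains behind the same IP. A simplified model of that collateral damage (the domain names and addresses here are hypothetical):

```python
from collections import defaultdict

# Hypothetical resolver results: many unrelated domains share one
# CDN edge address, as with Cloudflare's shared IPs.
resolutions = {
    "pirate-stream.example": "198.51.100.1",
    "city-library.example": "198.51.100.1",
    "family-blog.example": "198.51.100.1",
    "standalone-site.example": "203.0.113.9",
}

# Invert the mapping: which domains sit behind each address?
domains_by_ip = defaultdict(set)
for domain, ip in resolutions.items():
    domains_by_ip[ip].add(domain)

def collateral(blocked_ip):
    """Every domain behind a blocked address goes dark with it."""
    return sorted(domains_by_ip[blocked_ip])
```

Block `198.51.100.1` to stop one pirate stream and the library and the family blog vanish too — scale the shared address up to Cloudflare’s real infrastructure and you get the 40-million-domain blunder described above.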

Every new system has teething troubles, although not normally on this scale. But any hope that Italy’s national telecoms regulator, Autorità per le Garanzie nelle Comunicazioni (Authority for Communications Guarantees, AGCOM), the body running Piracy Shield, would have learned from the Cloudflare fiasco in order to stop it happening again was dispelled by what took place in October. TorrentFreak explains:

After blocking Cloudflare to prevent IPTV piracy just a few months ago, on Saturday the rightsholders behind Piracy Shield ordered Italy’s ISPs to block Google Drive. The subsequent nationwide blackout, affecting millions of Italians, wasn’t just a hapless IP address blunder. This was the reckless blocking of a Google.com subdomain that many 10-year-olds could identify as being important. Reckless people and internet infrastructure, what could possibly go wrong next?

The following day, there was a public discussion online involving the current and former AGCOM Commissioners, as well as various experts in relevant areas. The current AGCOM Commissioner Capitanio showed no sense of remorse for what happened. According to TorrentFreak’s report on the discussion:

Capitanio’s own focus on blocking to protect football was absolute. There was no concern expressed towards Google or the millions of users affected by the extended blackout, only defense of the Piracy Shield system.

Moreover:

AGCOM’s chief then went on to complain about Google’s refusal to delete Android apps already installed on users devices and other measures AGCOM regularly demands, none of which are required by law.

It seems that Capitanio regards even the current, one-sided and extreme Piracy Shield as too weak, and was trying to persuade Google to go even further than the law required – a typical copyright maximalist attitude. But worse was to come. Another participant in the discussion, former member of the Italian parliament, IT expert, and founder of Rialto Venture Capital, Stefano Quintarelli, pointed out a deeply worrying possibility:

the inherent insecurity of the Piracy Shield platform introduces a “huge systemic vulnerability” that eclipses the fight against piracy. Italy now has a system in place designed to dramatically disrupt internet communications and since no system is entirely secure, what happens if a bad actor somehow gains control?

Quintarelli says that if the Piracy Shield platform were to be infiltrated and maliciously exploited, essential services like hospitals, transportation systems, government functions, and critical infrastructure would be exposed to catastrophic blocking.

In other words, by placing the sanctity of copyright above all else, the Piracy Shield system could be turned against any aspect of Italian society with just a few keyboard commands. A malicious actor that managed to gain access to a system that has twice demonstrated a complete lack of even the most basic controls and checks could wreak havoc on computers and networks throughout Italy in a few seconds. Moreover, the damage could easily go well beyond the inconvenience of millions of people being blocked from accessing their files on Google Drive. A skilled intruder could carry out widespread sabotage of vital services and infrastructure that would cost billions of euros to rectify, and could even lead to the loss of lives.

No wonder, then, that an AGCOM board member, Elisa Giomi, has gone public with her concerns about the system. Giomi’s detailed rundown of Piracy Shield’s long-standing problems was posted in Italian on LinkedIn; TorrentFreak has a translation, and summarises the current situation as follows:

Despite a series of failures concerning Italy’s IPTV blocking platform Piracy Shield and the revelation that the ‘free’ platform will cost €2m per year, telecoms regulator AGCOM insists that all is going to plan. After breaking ranks, AGCOM board member Elisa Giomi called for the suspension of Piracy Shield while decrying its toll on public resources. When she was warned for her criticism, coupled with a threat of financial implications, Giomi came out fighting.

It’s clear that the Piracy Shield tragedy is far from over. It’s good to see courageous figures like Giomi joining the chorus of disapproval.

Source: Why Italy’s Piracy Shield risks moving from tiresome digital farce to serious national tragedy – Walled Culture

Italy, copyright – absurd doesn’t even begin to describe it.

Police bust pirate streaming service making €250 million per month: doesn’t this show the TV market is hugely broken?

An international law enforcement operation has dismantled a pirate streaming service that served over 22 million users worldwide and made €250 million ($263M) per month.

Italy’s Postal and Cybersecurity Police Service announced the action, codenamed “Taken Down,” stating they worked with Eurojust, Europol, and many other European countries, making this the largest takedown of its kind in Italy and internationally.

“More than 270 Postal Police officers, in collaboration with foreign law enforcement, carried out 89 searches in 15 Italian regions and 14 additional searches in the United Kingdom, the Netherlands, Sweden, Switzerland, Romania, Croatia, and China, involving 102 individuals,” reads the announcement.

“As part of the investigative framework initiated by the Catania Prosecutor’s Office and the Italian Postal Police, and with international cooperation, the Croatian police executed 11 arrest warrants against suspects.”

“Additionally, three high-ranking administrators of the IT network were identified in England and the Netherlands, along with 80 streaming control panels for IPTV channels managed by suspects throughout Italy,” mentions the police in the same announcement.

The pirated TV and content streaming service was operated by a hierarchical, transnational organization that illegally captured and resold the streams of popular content platforms.

The copyrighted content included redistributed IPTV, live broadcasts, and on-demand content from major broadcasters like Sky, Dazn, Mediaset, Amazon Prime, Netflix, Disney+, and Paramount.

The police say that these illegal streams were made accessible through numerous live-streaming websites but have not published any domains.

The annual financial damage from the illegal service is estimated at a massive €10 billion ($10.5B).

These broadcasts were resold to 22 million subscribed members via multiple distribution channels and an extensive seller network.

As a result of operation “Taken Down,” the authorities seized over 2,500 illegal channels and their servers, including nine servers in Romania and Hong Kong.

[…]

Source: Police bust pirate streaming service making €250 million per month

Bad licensing decisions by TV stations and broadcasters have given these streamers a product that people apparently really, really want and are willing to pay for.

Don’t shut down the streamers, shut down the system that makes this kind of product impossible to get.

BBC Gives Away Huge Sound Effects Library, with readable and sensible terms of use

BBC Sound Effects website top

Terms for using our content

A few rules to stop you (and us) getting in trouble.

a) Don’t mess with our content
What do we mean by that? This sort of thing:

  • Removing or altering BBC logos, and copyright notices from the content (if there are any)
  • Not removing content from your device or systems when we ask you to. This might happen when we take down content either temporarily or permanently, which we can do at any time, without notice.
b) Don’t use our content for harmful or offensive purposes
Here’s a list of things that may harm or offend:

  • Insulting, misleading, discriminating or defaming (damaging people’s reputations)
  • Promoting pornography, tobacco or weapons
  • Putting children at risk
  • Anything illegal. Like using hate speech, inciting terrorism or breaking privacy law
  • Anything that would harm the BBC’s reputation
  • Using our content for political or social campaigning purposes or for fundraising.
c) Don’t make it look like our content costs money

If you put our content on a site that charges for content, you have to say it is free-to-view.

d) Don’t make our content more prominent than non-BBC content

Otherwise it might look like we’re endorsing you. Which we’re not allowed to do.

Also, use our content alongside other stuff (e.g. your own editorial text). You can’t make a service of your own that contains only our content.

Speaking of which…

e) Don’t exaggerate your relationship with the BBC

You can’t say we endorse, promote, supply or approve of you.

And you can’t say you have exclusive access to our content.

f) Don’t associate our content with advertising or sponsorship
That means you can’t:

  • Put any other content between the link to our content and the content itself. So no ads or short videos people have to sit through
  • Put ads next to or over it
  • Put any ads in a web page or app that contain mostly our content
  • Put ads related to their subject alongside our content. So no trainer ads with an image of shoes
  • Add extra content that means you’d earn money from our content.
g) Don’t be misleading about where our content came from

You can’t remove or alter the copyright notice, or imply that someone else made it.

h) Don’t pretend to be the BBC
That includes:

  • Using our brands, trade marks or logos without our permission
  • Using or mentioning our content in press releases and other marketing materials
  • Making money from our content. You can’t charge people to view our images, for example
  • Sharing our content. For example, no uploading to social media sites. Sharing links is OK.

Source: Licensing | BBC Sound Effects

This is how licenses should be written. Well done, BBC.

Epic Allows Internet Archive To Distribute For Free ‘Unreal’ & ‘Unreal Tournament’ Forever

One of the most frustrating aspects in the ongoing conversation around the preservation of older video games, also known as cultural output, is the collision of IP rights and some publishers’ unwillingness to both continue to support and make available these older games and their refusal to release those same games into the public domain so that others can do so. It creates this crazy situation in which a company insists on retaining its copyrights over a video game that it has effectively disappeared with no good or legitimate way for the public to preserve them. As I’ve argued for some time now, this breaks the copyright contract with the public and should come with repercussions. The whole bargain that is copyright law is that creative works are granted a limited monopoly on the production of that work, with that work eventually arriving into the public domain. If that arrival is not allowed to occur, the bargain is broken, and not by anyone who would supposedly “infringe” on the copyright of that work.

[…]

But it just doesn’t have to be like this. Companies could be willing to give up their iron-fisted control over their IP for these older games they aren’t willing to support or preserve themselves and let others do it for them. And if you need a real world example of that, you need look only at how Epic is working with The Internet Archive to do exactly that.

Epic, now primarily known for Fortnite and the Unreal Engine, has given permission for two of the most significant video games ever made, Unreal and Unreal Tournament, to be freely accessed via the Internet Archive. As spotted by RPS, via ResetEra, the OldUnreal group announced the move on their Discord, along with instructions for how to easily download and play them on modern machines.

Huge kudos to Epic for being cool with this, because while it shouldn’t be unusual to happily let people freely share a three-decade-old game you don’t sell any more, it’s vanishingly rare. And if you remain in any doubt, we just got word back from Epic confirming they’re on board.

“We can confirm that Unreal 1 and Unreal Tournament are available on archive.org,” a spokesperson told us by email, “and people are free to independently link to and play these versions.”

Importantly, OldUnreal and The Internet Archive very much know what they’re doing here. Grabbing the ZIP file for the game sleekly pulls the ISO directly from The Internet Archive, installs it, and there are instructions for how to get the game up and running on modern hardware. This is obviously a labor of love from fans dedicated toward keeping these two excellent games alive.

[…]

But this is just two games. What would be really nice to see is this become a trend, or, better yet, a program run by The Internet Archive. Don’t want to bother to preserve your old game? No problem, let the IA do it for you!

Source: Epic Allows Internet Archive To Distribute For Free ‘Unreal’ & ‘Unreal Tournament’ Forever | Techdirt

HarperCollins Confirms It Has a Deal to Bleed Authors by Allowing Their Work to Be Used as Training for an AI Company

HarperCollins, one of the biggest publishers in the world, made a deal with an “artificial intelligence technology company” and is giving authors the option to opt in to the agreement or pass, 404 Media can confirm.

[…]

On Friday, author Daniel Kibblesmith, who wrote the children’s book Santa’s Husband and published it with HarperCollins, posted screenshots on Bluesky of an email he received, seemingly from his agent, informing him that the agency was approached by the publisher about the AI deal. “Let me know what you think, positive or negative, and we can handle the rest of this for you,” the screenshotted text in an email to Kibblesmith says. The screenshots show the agent telling Kibblesmith that HarperCollins was offering $2,500 (non-negotiable).

[…]

“You are receiving this memo because we have been informed by HarperCollins that they would like permission to include your book in an overall deal that they are making with a large tech company to use a broad swath of nonfiction books for the purpose of providing content for the training of an AI language learning model,” the screenshots say. “You are likely aware, as we all are, that there are controversies surrounding the use of copyrighted material in the training of AI models. Much of the controversy comes from the fact that many companies seem to be doing so without acknowledging or compensating the original creators. And of course there is concern that these AI models may one day make us all obsolete.”

“It seems like they think they’re cooked, and they’re chasing short money while they can. I disagree,” Kibblesmith told the AV Club. “The fear of robots replacing authors is a false binary. I see it as the beginning of two diverging markets, readers who want to connect with other humans across time and space, or readers who are satisfied with a customized on-demand content pellet fed to them by the big computer so they never have to be challenged again.”

Source: HarperCollins Confirms It Has a Deal to Sell Authors’ Work to AI Company

Now the copyright industry wants to apply deep, automated blocking to the Internet’s core routers

A central theme of Walled Culture the book (free digital versions available) and this blog is that the copyright industry is never satisfied. No matter how long the term of copyright, publishers and recording companies want more. No matter how harsh the punishments for infringement, the copyright intermediaries want them to be even more severe.

Another manifestation of this insatiability is seen in the ever-widening use of Internet site blocking. What began as a highly targeted one-off in the UK, when a court ordered the Newzbin2 site to be blocked, has become a favoured method of the copyright industry for cutting off access to thousands of sites around the world, including many blocked by mistake. Even more worryingly, the approach has led to blocks being implemented in some key parts of the Internet’s infrastructure that have no involvement with the material that flows through them: they are just a pipe. For example, last year we wrote about courts ordering the content delivery network Cloudflare to block sites. But even that isn’t enough, it seems. A post on TorrentFreak reports on a move to embed site blocking at the very heart of the Internet. This emerges from an interview about the Brazilian telecoms regulator Anatel:

In an interview with Tele.Sintese, outgoing Anatel board member Artur Coimbra recalls the lack of internet infrastructure in Brazil as recently as 2010. As head of the National Broadband Plan under the Ministry of Communications, that’s something he personally addressed. For Anatel today, blocking access to pirate websites and preventing unauthorized devices from communicating online is all in a day’s work.

Here’s the key revelation spotted by TorrentFreak:

“The second step, which we still need to evaluate because some companies want it, and others are more hesitant, is to allow Anatel to have access to the core routers to place a direct order on the router,” Coimbra reveals, referencing IPTV [Internet Protocol television] blocking.

“In these cases, these companies do not need to have someone on call to receive the [blocking] order and then implement it.”

Later on, Coimbra clarifies how far along this plan is:

“Participation is voluntary. We are still testing with some companies. So, it will take some time until it actually happens,” Coimbra says. “I can’t say [how long]. Our inspection team is carrying out tests with some operators, I can’t say which ones.”
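Taken at face value, "a direct order on the router" with no one on call means a pipeline in which whatever the regulator pushes is enforced immediately, with no review step between order and packet drop. A toy simulation of that property, using Python's ipaddress module and invented documentation-range prefixes:

```python
import ipaddress

# Blackhole table: prefixes pushed by the regulator. Matching traffic is dropped.
blackholes: list[ipaddress.IPv4Network] = []

def receive_blocking_order(prefix: str) -> None:
    """An automated order lands directly on the router: no human review step."""
    blackholes.append(ipaddress.ip_network(prefix))

def forward(dst: str) -> str:
    """Drop the packet if its destination falls inside any blackholed prefix."""
    addr = ipaddress.ip_address(dst)
    return "DROP" if any(addr in net for net in blackholes) else "FORWARD"

receive_blocking_order("203.0.113.0/24")  # one order, 256 addresses gone at once
```

The point of the sketch is what is missing: nothing between receive_blocking_order and forward checks who sent the order or what else the blocked prefix hosts, which is exactly why critics see a systemic vulnerability in this approach.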

Even if this is still in the testing phase, and only with “some” companies, it’s a terrible precedent. It means that blocking – and thus censorship – can be applied automatically, possibly without judicial oversight, to some of the most fundamental parts of the Internet’s plumbing. Once that happens, it will spread, just as the original single site block in the UK has spread worldwide. There’s even a hint that this might already be happening. Asked if such blocking is being applied anywhere else, Coimbra replies:

“I don’t know. Maybe in Spain and Portugal, which are more advanced countries in this fight. But I don’t have that information,” Coimbra responds, randomly naming two countries with which Brazil has consulted extensively on blocking matters.

Although it’s not clear from that whether Spain and Portugal are indeed taking this route, the fact that Coimbra suggests that they might be is deeply troubling. And even if they aren’t, we can be sure that the copyright industry will keep demanding Internet blocks and censorship at the deepest level until they get them.

Source: Now the copyright industry wants to apply deep, automated blocking to the Internet’s core routers – Walled Culture

Judge: Just Because AI Trains On Your Publication, Doesn’t Mean It Infringes On Your Copyright. Another case thrown out.

I get that a lot of people don’t like the big AI companies and how they scrape the web. But these copyright lawsuits being filed against them are absolute garbage. And you want that to be the case, because if it goes the other way, it will do real damage to the open web by further entrenching the largest companies. If you don’t like the AI companies, find another path, because copyright is not the answer.

So far, we’ve seen that these cases aren’t doing all that well, though many are still ongoing.

Last week, a judge tossed out one of the early ones against OpenAI, brought by Raw Story and Alternet.

Part of the problem is that these lawsuits assume, incorrectly, that these AI services really are, as some people falsely call them, “plagiarism machines.” The assumption is that they’re just copying everything and then handing out snippets of it.

But that’s not how it works. It is much more akin to reading all these works and then being able to make suggestions based on an understanding of how similar things kinda look, though from memory, not from having access to the originals.

Some of this case focused on whether or not OpenAI removed copyright management information (CMI) from the works that they were being trained on. This always felt like an extreme long shot, and the court finds Raw Story’s arguments wholly unconvincing in part because they don’t show any work that OpenAI distributed without their copyright management info.

For one thing, Plaintiffs are wrong that Section 1202 “grant[s] the copyright owner the sole prerogative to decide how future iterations of the work may differ from the version the owner published.” Other provisions of the Copyright Act afford such protections, see 17 U.S.C. § 106, but not Section 1202. Section 1202 protects copyright owners from specified interferences with the integrity of a work’s CMI. In other words, Defendants may, absent permission, reproduce or even create derivatives of Plaintiffs’ works – without incurring liability under Section 1202 – as long as Defendants keep Plaintiffs’ CMI intact. Indeed, the legislative history of the DMCA indicates that the Act’s purpose was not to guard against property-based injury. Rather, it was to “ensure the integrity of the electronic marketplace by preventing fraud and misinformation,” and to bring the United States into compliance with its obligations to do so under the World Intellectual Property Organization (WIPO) Copyright Treaty, art. 12(1) (“Obligations concerning Rights Management Information”) and WIPO Performances and Phonograms Treaty….

Moreover, I am not convinced that the mere removal of identifying information from a copyrighted work-absent dissemination-has any historical or common-law analogue.

Then there’s the bigger point, which is that the judge, Colleen McMahon, has a better understanding of how ChatGPT works than the plaintiffs and notes that just because ChatGPT was trained on pretty much the entire internet, that doesn’t mean it’s going to infringe on Raw Story’s copyright:

Plaintiffs allege that ChatGPT has been trained on “a scrape of most of the internet,” Compl. ¶ 29, which includes massive amounts of information from innumerable sources on almost any given subject. Plaintiffs have nowhere alleged that the information in their articles is copyrighted, nor could they do so. When a user inputs a question into ChatGPT, ChatGPT synthesizes the relevant information in its repository into an answer. Given the quantity of information contained in the repository, the likelihood that ChatGPT would output plagiarized content from one of Plaintiffs’ articles seems remote.

Finally, the judge basically says, “Look, I get it, you’re upset that ChatGPT read your stuff, but you don’t have an actual legal claim here.”

Let us be clear about what is really at stake here. The alleged injury for which Plaintiffs truly seek redress is not the exclusion of CMI from Defendants’ training sets, but rather Defendants’ use of Plaintiffs’ articles to develop ChatGPT without compensation to Plaintiffs. See Compl. ¶ 57 (“The OpenAI Defendants have acknowledged that use of copyright-protected works to train ChatGPT requires a license to that content, and in some instances, have entered licensing agreements with large copyright owners … They are also in licensing talks with other copyright owners in the news industry, but have offered no compensation to Plaintiffs.”). Whether or not that type of injury satisfies the injury-in-fact requirement, it is not the type of harm that has been “elevated” by Section 1202(b)(i) of the DMCA. See Spokeo, 578 U.S. at 341 (Congress may “elevate to the status of legally cognizable injuries, de facto injuries that were previously inadequate in law.”). Whether there is another statute or legal theory that does elevate this type of harm remains to be seen. But that question is not before the Court today.

While the judge dismisses the case but says they can seek leave to try again, it would appear that she is skeptical they could do so with any reasonable chance of success:

In the event of dismissal Plaintiffs seek leave to file an amended complaint. I cannot ascertain whether amendment would be futile without seeing a proposed amended pleading. I am skeptical about Plaintiffs’ ability to allege a cognizable injury but, at least as to injunctive relief, I am prepared to consider an amended pleading.

I totally get why publishers are annoyed and why they keep suing. But copyright is the wrong tool for the job. Hopefully, more courts will make this clear and we can get past all of these lawsuits.

Source: Judge: Just Because AI Trains On Your Publication, Doesn’t Mean It Infringes On Your Copyright | Techdirt

The Open Source Project DeFlock Is Mapping License Plate Surveillance Cameras All Over the World

[…] Flock is one of the largest vendors of automated license plate readers (ALPRs) in the country. The company markets itself as having the goal to fully “eliminate crime” with the use of ALPRs and other connected surveillance cameras, a target experts say is impossible.

In Huntsville, Freeman noticed that license plate reader cameras were positioned in a circle at major intersections, forming a perimeter that could track any car going into or out of the city’s downtown. He started to look for cameras all over Huntsville and the surrounding areas, and soon found that Flock was not the only game in town. He found cameras owned by Motorola, and a third brand owned by a company called Avigilon (a subsidiary of Motorola). Flock and automated license plate reader cameras owned by other companies are now in thousands of neighborhoods around the country. Many of these systems talk to each other and plug into other surveillance systems, making it possible to track people all over the country.

[…]

And so he made a map, and called it DeFlock. DeFlock runs on OpenStreetMap, the open source, collaboratively editable world map. He began posting signs for DeFlock to the posts holding up Huntsville’s ALPR cameras, and made a post about the project to the Huntsville subreddit, which got good attention from people who lived there.

[…]

When I first talked to Freeman, DeFlock had a few dozen cameras mapped in Huntsville and a handful mapped in Southern California and in the Seattle suburbs. A week later, as I write this, DeFlock has crowdsourced the locations of thousands of cameras in dozens of cities across the United States and the world.

“It still just scratches the surface,” Freeman said. “I added another page to the site that tracks cities and counties who have transparency reports on Flock’s site, and many of those don’t have any reported ALPRs though, so it’ll help people focus on where to look for them.”

[…]

He said so far more than 1,700 cameras have been reported in the United States and more than 5,600 have been reported around the world. He has also begun scraping parts of Flock’s website to give people a better idea of where to look to map them. For example, Flock says that Colton, California, a city with just over 50,000 people outside of San Bernardino, has 677 cameras.

A ring of Flock cameras in Huntsville’s downtown, pointing outward.

People who submit cameras to DeFlock have the ability to note the direction that they are pointing in, which can help people understand how these cameras are being positioned and the strategies that companies and police departments are using when deploying them.

[…]

Freeman also said he eventually wants to find a way to offer navigation directions that will allow people to avoid known ALPR cameras. The fact that it is impossible to drive in some cities without passing ALPR cameras that track and catalog your car’s movements is one of the core arguments in a Fourth Amendment challenge to Flock’s existence in Norfolk, Virginia; this project will likely show how infeasible traveling without being tracked actually is in America. Knowing where they are is the first step toward resisting them.
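Camera-avoiding navigation is, at bottom, a distance filter over route points: discard any candidate point that falls within some radius of a mapped ALPR. A rough sketch using the haversine great-circle formula (the camera coordinate below is invented, not DeFlock data):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius of 6,371 km

def point_is_clear(point, cameras, radius_m=150):
    """True if the point is at least radius_m from every known camera."""
    return all(haversine_m(*point, *cam) >= radius_m for cam in cameras)

# Hypothetical camera at an intersection, and two candidate route points.
cameras = [(34.7304, -86.5861)]
near = (34.7305, -86.5860)  # a few metres from the camera
far = (34.7400, -86.6000)   # well over a kilometre away
```

A router would run this check over every node of a candidate route and penalize or discard segments that fail it, which is why dense camera rings around a downtown can make a fully "clear" route impossible.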

Source: The Open Source Project DeFlock Is Mapping License Plate Surveillance Cameras All Over the World

Singapore to increase road capacity by GPS tracking all vehicles. Because location data is not sensitive and will never be hacked *cough*

Singapore’s Land Transport Authority (LTA) estimated last week that by tracking all vehicles with GPS it will be able to increase road capacity by 20,000 vehicles over the next few years.

The densely populated island state is moving from what it calls Electronic Road Pricing (ERP) 1.0 to ERP 2.0. The first version used gantries – or automatic tolls – to charge drivers a fee through an in-car device when they used specific roadways during certain hours.

ERP 2.0 sees the vehicle instead tracked through GPS, which can tell where a vehicle is at all operating times.

“ERP 2.0 will provide more comprehensive aggregated traffic information and will be able to operate without physical gantries. We will be able to introduce new ‘virtual gantries,’ which allow for more flexible and responsive congestion management,” explained the LTA.
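A "virtual gantry" is essentially a geofence: the in-car unit reports GPS positions, and a charge is triggered the moment a trace crosses from outside a charging zone to inside it during charging hours. A minimal sketch of that logic; the zone, hours, and fee below are invented for illustration, not LTA values:

```python
from datetime import datetime

# Invented charging zone (lat/lon bounding box) and charging window.
ZONE = {"lat_min": 1.276, "lat_max": 1.300, "lon_min": 103.83, "lon_max": 103.86}
CHARGE_HOURS = range(7, 20)  # 07:00-19:59
FEE_SGD = 3.0

def in_zone(lat, lon):
    return (ZONE["lat_min"] <= lat <= ZONE["lat_max"]
            and ZONE["lon_min"] <= lon <= ZONE["lon_max"])

def charges_for_trace(trace, when: datetime) -> float:
    """Total fee for a GPS trace: one charge per outside-to-inside crossing."""
    if when.hour not in CHARGE_HOURS:
        return 0.0
    fee, inside = 0.0, False
    for lat, lon in trace:
        now_inside = in_zone(lat, lon)
        if now_inside and not inside:
            fee += FEE_SGD  # crossed a "virtual gantry"
        inside = now_inside
    return fee

trace = [(1.270, 103.84), (1.280, 103.84), (1.290, 103.85)]  # enters the zone once
```

The flexibility the LTA touts comes from the fact that ZONE and CHARGE_HOURS here are just data: a physical gantry is a construction project, while a virtual one is a configuration change. The privacy cost is the same mechanism viewed from the other side, since charging requires a continuous position trace for every vehicle.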

But the island’s government doesn’t just control inflow into urban areas through toll-like charging – it also aggressively controls the total number of cars operating within its borders.

Singapore requires vehicle owners to bid for a set number of Certificates of Entitlement – costly operating permits valid for only ten years. The result is an additional cost of around SG$100,000 ($75,500) every ten years, depending on that year’s COE price, on top of a car’s usual price. The high total price disincentivizes mass car ownership, which helps the government manage traffic and emissions.

[…]

Source: Singapore to increase road capacity by GPS tracking vehicles • The Register

Washington Post and LA Times suppressed by fascist Trump Through Billionaire Cowardice

Newspaper presidential endorsements may not actually matter that much, but billionaire media owners blocking editorial teams from publishing their endorsements out of concern over potential retaliation from a future Donald Trump presidency should matter a lot.

If people were legitimately worried about the “weaponization of government” and the idea that companies might silence speech over threats from the White House, what has happened over the past few days should raise alarm bells. But somehow I doubt we’ll be seeing the folks who were screaming bloody murder over the nothingburger that was the Murthy lawsuit saying a word of concern about billionaire media owners stifling the speech of their editorial boards to curry favor with Donald Trump.

In 2017, the Washington Post changed its official slogan to “Democracy Dies in Darkness.”

The phrase was apparently a favorite of Bob Woodward, who was one of the main reporters who broke the Watergate story decades ago. Lots of people criticized the slogan at the time (and have continued to do so since then), but no more so than today, as Jeff Bezos apparently stepped in to block the newspaper from endorsing Kamala Harris for President.

An endorsement of Harris had been drafted by Post editorial page staffers but had yet to be published, according to two people who were briefed on the sequence of events and who spoke on the condition of anonymity because they were not authorized to speak publicly. The decision to no longer publish presidential endorsements was made by The Post’s owner, Amazon founder Jeff Bezos, according to the same two people.

This comes just days after a similar situation with the LA Times, whose billionaire owner, Patrick Soon-Shiong, similarly blocked the editorial board from publishing its planned endorsement of Harris. Soon-Shiong tried to “clarify” by claiming he had asked the team to instead publish something looking at the pros and cons of each candidate. However, as members of the editorial board noted in response, that’s what you’d expect the newsroom to do. The editorial board is literally supposed to express its opinion.

In the wake of that decision, at least three members of the LA Times editorial board have resigned. Mariel Garza quit almost immediately, and Robert Greene and Karin Klein followed a day later. As of this writing, it appears at least one person, editor-at-large Robert Kagan, has resigned from the Washington Post.

Or, as the Missing The Point account on Bluesky noted, perhaps the Washington Post is changing its slogan to “Hello Darkness My Old Friend”.

Marty Baron, who had been the Executive Editor of the Washington Post when it chose “Democracy Dies in Darkness” as a slogan, called Bezos’ decision out as “cowardice” and warned that Trump would see this as a victory of his intimidation techniques, and it would embolden him.

The thing is, for all the talk over the past decade or so about “free speech” and “the weaponization of government,” this sure looks like these two billionaires suppressing speech from their organizations over fear of how Trump will react, should he be elected.

During his last term, Donald Trump famously targeted Amazon in retaliation for coverage he didn’t like from the Washington Post. His anger at WaPo coverage caused him to ask the Postmaster General to double Amazon’s postage rates. Trump also told his Secretary of Defense James Mattis to “screw Amazon” and to kill a $10 billion cloud computing deal the Pentagon had lined up.

For all the (misleading) talk about the Biden administration putting pressure on tech companies, what Trump did there seemed like legitimate First Amendment violations. He punished Amazon for speech he didn’t like. It’s funny how all the “weaponization of the government” people never made a peep about any of that.

As for Soon-Shiong, it’s been said that he angled for a cabinet-level “health care czar” position in the last Trump administration, so perhaps he’s hoping to increase his chances this time around.

In both cases, though, this sure looks like Trump’s past retaliations and direct promises of future retaliation against all who have challenged him are having a very clear censorial impact. In the last few months Trump has been pretty explicit that, should he win, he intends to punish media properties that reported on him in ways he dislikes. These are all reasons why anyone who believes in free speech should be speaking out about the danger Donald Trump poses to our most cherished First Amendment rights.

Especially those in the media.

Bezos and Soon-Shiong are acting like cowards. Rather than standing up and doing what’s right, they’re pre-caving, before the election has even happened. It’s weak and pathetic, and Trump will take it (accurately) to mean that he can continue to walk all over them, and continue to get the media to pull punches by threatening retaliation.

If democracy dies in darkness, it’s because Bezos and Soon-Shiong helped turn off the light they were carrying.

Source: Democracy Dies In Darkness… Helped Along By Billionaire Cowardice | Techdirt

Feds Say You Don’t Have a Right to Check Out Retro Video Games Like Library Books. Apparently they want you to pirate them instead.

Most of the world’s video games from close to 50 years of history are effectively, legally dead. A Video Games History Foundation study found you can’t buy nearly 90% of games released before 2010. Preservationists have been looking for ways to let people legally access gaming history, but the U.S. Copyright Office dealt them a heavy blow Friday, declaring that neither you nor any researcher has a right to remotely access old games under the Digital Millennium Copyright Act (DMCA).

Groups like the VGHF and the Software Preservation Network have been putting their weight behind an exemption to the DMCA surrounding video game access. The law says that you can’t remotely access old, defunct games that are still under copyright without a license, even though they’re not available for purchase. Current rules in the DMCA restrict libraries and repositories of old games to one person at a time, in person.

The foundation’s proposed exemption would have allowed more than one person at a time to access the content stored in museums, archives, and libraries. This would allow players to access a piece of video game history like they would if they checked out an ebook from a library. The VGHF and SPN argued that if the museum has several copies of a game in its possession, then it should be able to allow as many people to access the game as there are copies available.

In the Copyright Office’s decision dated Oct. 18 (found on Page 30), Director Shira Perlmutter agreed with multiple industry groups, including the Entertainment Software Association. She recommended the Library of Congress keep the same restrictions. Section 1201 of the DMCA restricts “unauthorized” access to copyrighted works, including games. However, it allows the Library of Congress to allow some classes of people to circumvent those restrictions.

In a statement, the VGHF said lobbying efforts from rightsholders “continue to hold back progress.” The group pointed to comments from a representative from the ESA. An attorney for the ESA told Ars Technica, “I don’t think there is at the moment any combinations of limitations that ESA members would support to provide remote access.”

Video game preservationists said these game repositories could show full-screen copyright notices to anybody who checked out a game. They would also limit each checkout to a set time and restrict access via “technological controls,” like a purpose-built distribution or streaming platform.

Industry groups argued that those museums didn’t have “appropriate safeguards” to prevent users from distributing the games once they had them in hand. They also argued that there’s a “substantial market” for older or classic games, and a new, free library to access games would “jeopardize” this market. Perlmutter agreed with the industry groups.

“While the Register appreciates that proponents have suggested broad safeguards that could deter recreational uses of video games in some cases, she believes that such requirements are not specific enough to conclude that they would prevent market harms,” she wrote.

Do libraries that lend books hurt the literary industry? In many cases, publishers see libraries as free advertising for their products. Libraries create word of mouth, and since they only stock a limited number of copies, those who want to keep a book longer are incentivized to buy one. The video game industry is so effective at shooting itself in the foot that it doesn’t even recognize when third-party preservationists are offering to help it at no cost to the publishers.

If there is such a substantial market for classic games, why are so many still unavailable for purchase? Players will inevitably turn to piracy or emulation if there’s no easy-to-access way of playing older games.

“The game industry’s absolutist position… forces researchers to explore extra-legal methods to access the vast majority of out-of-print video games that are otherwise unavailable,” the VGHF wrote.

Source: Feds Say You Don’t Have a Right to Check Out Retro Video Games Like Library Books

Juicy Licensing Deals With AI Companies Show That Publishers Don’t Actually Care About Creators

One of the many interesting aspects of the current enthusiasm for generative AI is the way that it has electrified the formerly rather sleepy world of copyright. Where before publishers thought they had successfully locked down more or less everything digital with copyright, they now find themselves confronted with deep-pocketed companies – both established ones like Google and Microsoft, and newer ones like OpenAI – that want to overturn the previous norms of using copyright material. In particular, the latter group want to train their AI systems on huge quantities of text, images, videos and sounds.

As Walled Culture has reported, this has led to a spate of lawsuits from the copyright world, desperate to retain their control over digital material. They have framed this as an act of solidarity with the poor exploited creators. It’s a shrewd move, and one that seems to be gaining traction. Lots of writers and artists think they are being robbed of something by Big AI, even though that view is based on a misunderstanding of how generative AI works. However, in the light of stories like one in The Bookseller, they might want to reconsider their views about who exactly is being evil here:

Academic publisher Wiley has revealed it is set to make $44 million (£33 million) from Artificial Intelligence (AI) partnerships that it is not giving authors the opportunity to opt-out from.

As to whether authors would share in that bounty:

A spokesperson confirmed that Wiley authors are set to receive remuneration for the licensing of their work based on their “contractual terms”.

That might mean they get nothing, if there is no explicit clause in their contract about sharing AI licensing income. For example, here’s what is happening with the publisher Taylor & Francis:

In July, authors hit out at another academic publisher, Taylor & Francis, the parent company of Routledge, over an AI deal with Microsoft worth $10 million, claiming they were not given the opportunity to opt out and are receiving no extra payment for the use of their research by the tech company. T&F later confirmed it was set to make $75 million from two AI partnership deals.

It’s not just in the world of academic publishing that deals are being struck. Back in July, Forbes reported on a “flurry of AI licensing activity”:

The most active area for individual deals right now by far—judging from publicly known deals—is news and journalism. Over the past year, organizations including Vox Media (parent of New York magazine, The Verge, and Eater), News Corp (Wall Street Journal, New York Post, The Times (London)), Dotdash Meredith (People, Entertainment Weekly, InStyle), Time, The Atlantic, Financial Times, and European giants such as Le Monde of France, Axel Springer of Germany, and Prisa Media of Spain have each made licensing deals with OpenAI.

In the absence of any public promises to pass on some of the money these licensing deals will bring, it is not unreasonable to assume that journalists won’t be seeing much if any of it, just as they aren’t seeing much from the link tax.

The increasing number of such licensing deals between publishers and AI companies shows that the former aren’t really too worried about the latter ingesting huge quantities of material for training their AI systems, provided they get paid. And the fact that there is no sign of this money being passed on in its entirety to the people who actually created that material also confirms that publishers don’t really care about creators. In other words, it’s pretty much the status quo from before generative AI came along. For doing nothing, the intermediaries are extracting money from the digital giants by invoking the creators and their copyrights. Those creators do all the work, but once again see little to no benefit from the deals being signed behind closed doors.

Source: Juicy Licensing Deals With AI Companies Show That Publishers Don’t Actually Care About Creators | Techdirt

Google changes Terms Of Service, now spies on your AI prompts

The new terms come in on November 15th.

4.3 Generative AI Safety and Abuse. Google uses automated safety tools to detect abuse of Generative AI Services. Notwithstanding the “Handling of Prompts and Generated Output” section in the Service Specific Terms, if these tools detect potential abuse or violations of Google’s AUP or Prohibited Use Policy, Google may log Customer prompts solely for the purpose of reviewing and determining whether a violation has occurred. See the Abuse Monitoring documentation page for more information about how logging prompts impacts Customer’s use of the Services.

Source: Google Cloud Platform Terms Of Service

Both uBlock Origin and Lite face browser problems

Both uBlock Origin and its smaller sibling, uBlock Origin Lite, are experiencing problems thanks to browser vendors that really ought to know better.

Developer Raymond Hill, or gorhill on GitHub, is one of the biggest unsung heroes of the modern web. He’s the man behind two of the leading browser extensions to block unwanted advertising, the classic uBlock Origin and its smaller, simpler relation, uBlock Origin Lite. They both do the same job in significantly different ways, so depending on your preferred browser, you must now make a choice.

Gorhill reports on GitHub that an automated code review by Mozilla flagged problems with uBlock Origin Lite. As a result, he has pulled the add-on from Mozilla’s extensions site. The extension’s former page now just says “Oops! We can’t find that page”. You can still install it direct from GitHub, though.

The good news is that the full-fat version, uBlock Origin, is still there, so you can choose that. Hill has a detailed explanation of why and how uBlock Origin works best on Firefox. It’s a snag, though, if like The Reg FOSS desk you habitually run both Firefox and Chrome and want to keep both on the same ad blocker.

That’s because, as The Register warned back in August, Google’s new Manifest V3 extensions system means the removal of Manifest V2 – upon which uBlock Origin depends. For now, it still works – this vulture is running Chrome version 130 and uBO is still functioning. It’s still available on Google’s web extensions store, with a slightly misleading warning:

This extension may soon no longer be supported because it doesn’t follow best practices for Chrome extensions.

So, if you use Chrome, or a Chrome-based browser – which is most of them – then you will soon be compelled to remove uBO and switch to uBlock Origin Lite instead.
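For the curious, the underlying technical shift is this: Manifest V2 let an extension like uBO inspect and filter every request with its own code at runtime, while Manifest V3 forces blockers to hand the browser a static list of declarative rules instead. A hypothetical `declarativeNetRequest` rule (the field names follow Chrome’s documented rule schema; the blocked domain here is made up) looks something like:

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image", "xmlhttprequest"]
  }
}
```

Because such rule lists are capped in size and can’t run arbitrary filtering logic, a Lite-style blocker is necessarily less capable than the original.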

It would surely be overly cynical of us to suggest that issues with ad blockers were a foreseeable difficulty now that Mozilla is an advertising company.

To sum up, if you have a Mozilla-family browser, uBlock Origin is the easier option. If you have a Chrome-family browser, such as Microsoft Edge, then, very soon, uBlock Origin Lite will be the only version available to you.

There are other in-browser ad-blocking options out there, of course.

Linux users may well want to consider having Privoxy running in the background as well. For example, on Ubuntu and Debian-family distros, just type sudo apt install -y privoxy and reboot. If you run your own home network, maybe look into configuring an old Raspberry Pi with Pi-hole.

uBlock Origin started out as a fork of uBlock, which is now owned by the developers of AdBlock – which means that, as The Register said in 2021, it is “made by an advertising company that brokers ‘acceptable ads.'”

If acceptable ads don’t sound so bad – and to be fair, they’re better than the full Times-Square-neon-infested experience of much of the modern web – then you can still install the free AdBlock Plus, which is available in both Mozilla’s store and the Chrome store.

Source: Both uBlock Origin and Lite face browser problems • The Register

German court: LAION’s generative AI training dataset is legal thanks to EU copyright exceptions

The copyright world is currently trying to assert its control over the new world of generative AI through a number of lawsuits, several of which have been discussed previously on Walled Culture. We now have our first decision in this area, from the regional court in Hamburg. Andres Guadamuz has provided an excellent detailed analysis of a ruling that is important for the German judges’ discussion of how EU copyright law applies to various aspects of generative AI. The case concerns the freely-available dataset from LAION (Large-scale Artificial Intelligence Open Network), a German non-profit. As the LAION FAQ says: “LAION datasets are simply indexes to the internet, i.e. lists of URLs to the original images together with the ALT texts found linked to those images.” Guadamuz explains:

The case was brought by German photographer Robert Kneschke, who found that some of his photographs had been included in the LAION dataset. He requested the images to be removed, but LAION argued that they had no images, only links to where the images could be found online. Kneschke argued that the process of collecting the dataset had included making copies of the images to extract information, and that this amounted to copyright infringement.
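The distinction LAION drew, links versus copies, can be illustrated with a toy sketch (this is not LAION’s actual schema or tooling; the field names and URLs are made up). An index-style dataset holds only URLs and ALT texts, so “removing an image” from it can only ever mean removing a link:

```python
# Toy sketch of an index-style dataset: rows hold links and captions,
# never the image bytes themselves (field names are illustrative).
dataset = [
    {"url": "https://example.com/sunset.jpg", "alt": "sunset over a harbour"},
    {"url": "https://example.com/cat.jpg", "alt": "a cat sleeping on a sofa"},
]

def remove_links(rows, url_substring):
    """Drop entries whose URL matches; the images themselves live elsewhere."""
    return [r for r in rows if url_substring not in r["url"]]

remaining = remove_links(dataset, "cat.jpg")
print([r["url"] for r in remaining])  # prints ['https://example.com/sunset.jpg']
```

The copies at issue in the case were made only transiently, while crawling each URL to extract that metadata, which is why the text-and-data-mining exception was the battleground.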

LAION admitted making copies, but said that it was in compliance with the exception for text and data mining (TDM) present in German law, which is a transposition of Article 3 of the 2019 EU Copyright Directive. The German judges agreed:

The court argued that while LAION had been used by commercial organisations, the dataset itself had been released to the public free of charge, and no evidence was presented that any commercial body had control over its operations. Therefore, the dataset is non-commercial and for scientific research. So LAION’s actions are covered by section 60d of the German Copyright Act

That’s good news for LAION and its dataset, but perhaps more interesting for the general field of generative AI is the court’s discussion of how the EU Copyright Directive and its exceptions apply to AI training. It’s a key question because copyright companies claim that they don’t, and that when such training involves copyright material, permission is needed to use it. Guadamuz summarises that point of view as follows:

the argument is that the legislators didn’t intend to cover generative AI when they passed the [EU Copyright Directive], so text and data mining does not cover the training of a model, just the making of a copy to extract information from it. The argument is that making a copy to extract information to create a dataset is fine, as the court agreed here, but the making of a copy in order to extract information to make a model is not. I somehow think that this completely misses the way in which a model is trained; a dataset can have copies of a work, or in the case of LAION, links to the copies of the work. A trained model doesn’t contain copies of the works with which it was trained, and regurgitation of works in the training data in an output is another legal issue entirely.

The judgment from the Hamburg court says that while legislators may not have been aware of generative AI model training in 2019, when they drew up the EU Copyright Directive, they certainly are now. The judges use the EU’s 2024 AI Act as evidence of this, citing a paragraph that makes explicit reference to AI models complying with the text and data mining regulation in the earlier Copyright Directive.

As Guadamuz writes in his post, this is an important point, but its legal impact may be limited. The judgment is only the view of a local German court, so other jurisdictions may produce different results. Moreover, the plaintiff, Robert Kneschke, may appeal, and the decision could yet be overturned. Furthermore, the ruling only concerns the use of text and data mining to create a training dataset, not the training itself, although the judges’ comments on the latter suggest that it would be legal too. In other words, this local outbreak of good sense in Germany is welcome, but we are still a long way from complete legal clarity on the training of generative AI systems on copyright material.

Source: German court: LAION’s generative AI training dataset is legal thanks to EU copyright exceptions – Walled Culture