OpenAI Threatens Popular GitHub Project With Lawsuit Over API Use

Anyone can use ChatGPT for free, but if you want to use GPT4, the latest language model, you have to either pay for ChatGPT Plus, pay for access to OpenAI’s API, or find another site that has incorporated GPT4 into its own free chatbot. There are sites that use OpenAI such as Forefront and You.com, but what if you want to make your own bot and don’t want to pay for the API?

A GitHub project called GPT4free allows you to get free access to the GPT4 and GPT3.5 models by funneling your queries through sites like You.com, Quora and CoCalc and giving you back the answers. The project is GitHub’s most popular new repo, getting 14,000 stars this week.

Now, according to Xtekky, the European computer science student who runs the repo, OpenAI has sent a letter demanding that he take the whole thing down within five days or face a lawsuit.

I interviewed Xtekky via Telegram, and he said he doesn’t think OpenAI should be targeting him since he isn’t connecting directly to the company’s API, but is instead getting data from other sites that are paying for their own API licenses. If the owners of those sites have a problem with his scripts querying them, they should approach him directly, he posited.

[…]

On the backend, GPT4Free is visiting various API URLs that sites like You.com, an AI-powered search engine that employs OpenAI’s GPT3.5 model for its answers, use for their own queries. For example, the main GPT4Free script hits the URL https://you.com/api/streamingSearch, feeds it various parameters, and then takes the JSON it returns and formats it. The GPT4Free repo also has scripts that grab data from other sites such as Quora, Forefront, and TheB. Any enterprising developer could use these simple scripts to make their own bot.
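
To make the pattern concrete, here is a minimal sketch of that kind of proxying, not the repo’s actual code: the endpoint is the one named above, but the query parameter, streaming format, and JSON field names are assumptions for illustration.

```python
import json
import requests

# Hypothetical sketch of the pattern described above: send a prompt to a
# site's internal search endpoint and stitch the streamed answer together.
# The URL is the one named in the article; the parameter name, event
# format, and JSON field are illustrative assumptions.
def ask_you_com(prompt: str) -> str:
    resp = requests.get(
        "https://you.com/api/streamingSearch",
        params={"q": prompt},                   # assumed parameter name
        headers={"User-Agent": "Mozilla/5.0"},  # look like a browser
        stream=True,
        timeout=30,
    )
    resp.raise_for_status()
    parts = []
    for line in resp.iter_lines(decode_unicode=True):
        # Assume server-sent events of the form "data: {...}".
        if line and line.startswith("data:"):
            try:
                payload = json.loads(line[len("data:"):].strip())
            except json.JSONDecodeError:
                continue
            parts.append(payload.get("token", ""))  # assumed field name
    return "".join(parts)

if __name__ == "__main__":
    print(ask_you_com("Summarize today's tech news."))
```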

“One could achieve the same [thing by] just opening tabs of the sites. I can open tabs of Phind, You, etc. on my browser and spam requests,” Xtekky said. “My repo just does it in a simpler way.”

All of the sites GPT4Free draws from are paying OpenAI fees in order to use its large language models. So when you use the scripts, those sites end up footing the bill for your queries, without you ever visiting them. If those sites rely on ad revenue to offset their API costs, they lose money on every one of these queries.

Xtekky said that he is more than happy to take down scripts that use individual sites’ APIs upon request from the owners of those sites. He said that he has already taken down scripts that use phind.com, ora.sh and writesonic.com.

Perhaps more importantly, Xtekky noted, any of these sites could block external uses of their internal APIs with common security measures. One of many methods that sites like You.com could use is to block API traffic from any IPs that are not their own.
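
As a rough illustration of that kind of guard, here is a minimal sketch assuming a Python/Flask backend; real sites would more likely rely on signed session tokens, rate limiting, or a WAF, and the networks below are placeholders.

```python
from ipaddress import ip_address, ip_network

from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical allow-list: only the site's own frontend servers may call
# the internal API. The ranges are illustrative placeholders.
ALLOWED_NETWORKS = [ip_network("10.0.0.0/8"), ip_network("192.0.2.0/24")]

@app.before_request
def restrict_internal_api():
    # Gate every /api/ route on the caller's source address.
    if request.path.startswith("/api/"):
        client = ip_address(request.remote_addr)
        if not any(client in net for net in ALLOWED_NETWORKS):
            abort(403)  # third-party scripts get an explicit refusal

@app.route("/api/streamingSearch")
def streaming_search():
    return {"answer": "..."}  # stand-in for the real handler
```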

Xtekky said that he has advised all the sites that wrote to him that they should secure their APIs, but none of them has done so. So, even if he takes the scripts down from his repo, any other developer could do the same thing.

[…]

Xtekky initially told me that he hadn’t decided whether to take the repo down or not. However, several hours after this story first published, we chatted again and he told me that he plans to keep the repo up and to tell OpenAI that, if they want it taken down, they should file a formal request with GitHub instead of with him.

“I believe they contacted me before to pressurize me into deleting the repo myself,” he said. “But the right way should be an actual official DMCA, through GitHub.”

Even if the original repo is taken down, there’s a good chance that the code — and this method of accessing GPT4 and GPT3.5 — will be published elsewhere by members of the community. Even if GPT4Free had never existed, anyone could find ways to use these sites’ APIs as long as they remain unsecured.

“Users are sharing and hosting this project everywhere,” he said. “Deletion of my repo will be insignificant.”

[…]

Source: OpenAI Threatens Popular GitHub Project With Lawsuit Over API Use | Tom’s Hardware

Bungie Somehow Wins $12 Million In Destiny 2 Anti-Cheat Lawsuit

As Bungie continues on its warpath against Destiny 2 cheaters, the studio has won $12 million in the lawsuit against Romanian cheat seller Mihai Claudiu-Florentin that began back in 2021.

Claudiu-Florentin sold cheat software at VeteranCheats, which allowed users to get an edge over other players with software that could do things like tweak their aim and let them see through walls. Naturally, Bungie argued that the software was damaging to Destiny 2’s competitive and cooperative modes, and has won the case against the seller. The lawsuit alleges “copyright infringement, violations of the Digital Millennium Copyright Act (DMCA), breach of contract, intentional interference with contractual relations, and violations of the Washington Consumer Protection Act.” (Thanks, TheGamePost).

You can read a full PDF of the suit, courtesy of TheGamePost, here, but the gist of it is that Bungie is asking for $12,059,912.98 in total damages, with $11,696,000 going toward violations of the DMCA, $146,662.28 for violations of the Copyright Act, and $217,250.70 accounting for the studio’s attorney expenses. After subpoenaing Stripe, a payment processing service, Bungie learned that at least 5,848 separate transactions took place through the service that included Destiny 2 cheating software from November 2020 to July 2022. Notably, the DMCA award works out to exactly $2,000 for each of those 5,848 transactions.

While Bungie might be $12 million richer for it, VeteranCheats’ website is still up and offering cheating software for games like Overwatch and Call of Duty. However, Destiny no longer appears on the site’s home page or in its community search results.

According to the lawsuit, Bungie has paid around $2 million in its anti-cheating efforts between staffing and software. This also extended to a blanket ban on cheating devices in both competitive and PvE modes earlier this month.

While Destiny 2 has been wrapped up in legal issues, the shooter has also been caught up in some other controversy recently thanks to a major leak that led to the ban of a major content creator in the game’s community.

Source: Bungie Wins $12 Million In Destiny 2 Anti-Cheat Lawsuit

While I personally dislike cheating in online games, it beggars belief that someone selling software is not allowed to create software which edits memory registers. You are the owner of what is on your computer, despite anything that software publishers put in their unreadable terms. You can modify anything on there however you like.

Apple App Store Policies Upheld by Court in Epic Games Antitrust Challenge – Apple can continue its monopoly and massive 30% charges in the US App Store (but not in the EU)

Apple Inc. won an appeals court ruling upholding its App Store’s policies in an antitrust challenge brought by Epic Games Inc.

Monday’s ruling by the US Ninth Circuit Court of Appeals affirmed a lower-court judge’s 2021 decision largely rejecting claims by Epic, the maker of Fortnite, that Apple’s online marketplace policies violated federal law because they ban third-party app marketplaces on its operating system. The appeals panel upheld the judge’s ruling in Epic’s favor on California state law claims.

The ruling comes as Apple has been making changes to the way the App Store operates to address developer concerns since Epic sued the company in 2020. The dispute began after Apple expelled the Fortnite game from the App Store because Epic created a workaround to paying a 30% fee on customers’ in-app purchases.

“There is a lively and important debate about the role played in our economy and democracy by online transaction platforms with market power,” the three-judge panel said. “Our job as a federal court of appeals, however, is not to resolve that debate — nor could we even attempt to do so. Instead, in this decision, we faithfully applied existing precedent to the facts.”

Apple hailed the outcome as a “resounding victory,” saying nine out of 10 claims were decided in its favor.

[…]

Epic Chief Executive Officer Tim Sweeney tweeted that although Apple prevailed, at least the appeals court kept intact the portion of the 2021 ruling that sided with Epic.

“Fortunately, the court’s positive decision rejecting Apple’s anti-steering provisions frees iOS developers to send consumers to the web to do business with them directly there. We’re working on next steps,” he wrote.

[…]

Following a three-week trial in Oakland, California, Rogers ordered the technology giant to allow developers of mobile applications to steer consumers to outside payment methods, granting an injunction sought by Epic. The judge, however, didn’t see the need for third-party app stores or to push Apple to revamp policies over app developer fees.

[…]

US and European authorities have taken steps to rein in Apple’s stronghold over the mobile market. In response to the Digital Markets Act — a new series of laws in the European Union — Apple is planning to allow outside apps as early as next year as part of the upcoming iOS 17 software update, Bloomberg News has reported.

[…]

Source: Apple App Store Policies Upheld by Court in Epic Games Antitrust Challenge – Bloomberg

It’s a pretty sad day when an antitrust court runs away from calling a monopoly a monopoly.

ICANN and Verisign Proposal Would Allow Any Government In The World To Seize Domain Names with no redress

ICANN, the organization that regulates global domain name policy, and Verisign, the abusive monopolist that operates the .COM and .NET top-level domains, have quietly proposed enormous changes to global domain name policy in their recently published “Proposed Renewal of the Registry Agreement for .NET”, which is now open for public comment.

Either by design, or unintentionally, they’ve proposed allowing any government in the world to cancel, redirect, or transfer to their control applicable domain names! This is an outrageous and dangerous proposal that must be stopped. […]

The offending text can be found buried in an Appendix of the proposed new registry agreement. […] the critical changes can be found in Section 2.7 of Appendix 8, on pages 147-148. (the blue text represents new language) Below is a screenshot of that section:

[Screenshot: Proposed Changes in Appendix 8 of the .NET agreement]

Section 2.7(b)(i) is new and problematic on its own [editor bold!] (and I’ll analyze that in more detail in a future blog post – there are other things wrong with this proposed agreement, but I’m starting off with the worst aspect). However, carefully examine the new text in Section 2.7(b)(ii) on page 148 of the redline document.

It would allow Verisign, via the new text in 2.7(b)(ii)(5), to:

“deny, cancel, redirect or transfer any registration or transaction, or place any domain name(s) on registry lock, hold or similar status, as it deems necessary, in its unlimited and sole discretion” [the language at the beginning of 2.7(b)(ii), emphasis added]

Then it lists when it can take the above measures. The first three are non-controversial (and already exist, as they’re not in blue text). The fourth is new, relating to security, and might be abused by Verisign. But look at the fifth item! I was shocked to see this new language:

“(5) to ensure compliance with applicable law, government rules or regulations, or pursuant to any legal order or subpoena of any government, administrative or governmental authority, or court of competent jurisdiction,” [emphasis added]

This text has a plain and simple meaning — they propose to allow “any government,” “any administrative authority,” “any government authority,” and “court[s] of competent jurisdiction” to deny, cancel, redirect, or transfer any domain name registration […].

You don’t have to be ICANN’s fiercest critic to see that this is arguably the most dangerous language ever inserted into an ICANN agreement.

“Any government” means what it says, so that means China, Russia, Iran, Turkey, the Pitcairn Islands, Tuvalu, the State of Texas, the State of California, the City of Detroit, a village of 100 people with a local council in Botswana, or literally “any government,” whether it be state, local, or national. We’re talking about countless numbers of “governments” in the world (you’d have to add up all the cities, towns, states, provinces and nations, for starters). If that wasn’t bad enough, their proposal adds “any administrative authority” and “any government authority” (i.e. government bureaucrats in any jurisdiction in the world) that would be empowered to “deny, cancel, redirect or transfer” domain names. [The new text about “court of competent jurisdiction” is also problematic, as it would override determinations that would be made by registrars via the agreements that domain name registrants have with their registrars.]

This proposal represents a complete government takeover of domain names, with no due process protections for registrants. It would usurp the role of registrars, making governments go directly to Verisign (or any other registry that adopts similar language) to achieve anything they desired. It literally overturns more than two decades of global domain name policy.

[…]

they bury major policy changes in an appendix near the end of a document that is over 100 pages long (133 pages long for the “clean” version of the document; 181 pages for the “redline” version)

[…]

ICANN and Verisign appear to have deliberately timed the comment period to avoid public scrutiny.  The public comment period opened on April 13, 2023, and is scheduled to end (currently) on May 25, 2023. However, the ICANN76 public meeting was held between March 11 and March 16, 2023, and the ICANN77 public meeting will be held between June 12 and June 15, 2023. Thus, they published the proposal only after the ICANN76 public meeting had ended (where we could have asked ICANN staff and the board questions about the proposal), and seek to end the public comment period before ICANN77 begins. This is likely not by chance, but by design.

[…]

What can you do? You can submit a public comment, showing your opposition to the changes, and/or asking for more time to analyze the proposal. [There are other things wrong with the proposed agreement, e.g. all of Appendix 11 (which takes language from new gTLD agreements, which are entirely different from legacy gTLDs like .com/net/org); section 2.14 of Appendix 8 further protects Verisign via the new language (page 151 of the redline document); section 6.3 of Appendix 8, on page 158 of the redline, seeks to protect Verisign from losing the contract in the event of a cyberattack that disrupts operations — however, we are already paying above-market rates for .net (and .com) domain names, arguably because Verisign tells others that they have high expenses in order to keep 100% uptime even in the face of attacks; this new language allows them to degrade service with no reduction in fees.]

[…]

Update #1: I’ve submitted a “placeholder” comment to ICANN, to get the ball rolling. There’s also a thread on NamePros.com about this topic, if you have questions, etc.

Update #2: DomainIncite points out correctly that the offending language is already in the .com agreement, and that people weren’t paying attention to this issue three years ago, as there were bigger fish to fry. I went back and reviewed my own comment submission, and see that I did raise the issue back then too:

[…]

Source: Red Alert: ICANN and Verisign Proposal Would Allow Any Government In The World To Seize Domain Names – FreeSpeech.com

AI-generated Drake and The Weeknd song pulled from streaming platforms

If you spent almost any time on the internet this week, you probably saw a lot of chatter about “Heart on My Sleeve.” The song went viral for featuring AI-generated voices that do a pretty good job of mimicking Drake and The Weeknd singing about a recent breakup.

On Monday, Apple Music and Spotify pulled the track following a complaint from Universal Music Group, the label that represents the real-life versions of the two Toronto-born artists. A day later, YouTube, Amazon, SoundCloud, Tidal, Deezer and TikTok did the same.

At least, they tried to comply with the complaint, but as is always the case with the internet, you can still find the song on websites like YouTube. Before it was removed from Spotify, “Heart on My Sleeve” was a bona fide hit. People streamed the track more than 600,000 times. On TikTok, where the creator of the song, the aptly named Ghostwriter977, first uploaded it, users listened to “Heart on My Sleeve” more than 15 million times.

In a statement Universal Music Group shared with publications like Music Business Worldwide, the label argued the training of a generative AI using the voices of Drake and The Weeknd was “a breach of our agreements and a violation of copyright law.” The company added that streaming platforms had a “legal and ethical responsibility to prevent the use of their services in ways that harm artists.”

It’s fair to say the music industry, much like the rest of society, now finds itself at an inflection point over the use of AI. While there are obvious ethical issues related to the creation of “Heart on My Sleeve,” it’s unclear if it’s a violation of traditional copyright law. In March, the US Copyright Office said art, including music, cannot be copyrighted if it was produced by providing a text prompt to a generative AI model. However, the office left the door open to granting copyright protections to works with AI-generated elements.

“The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work,” it said. “This is necessarily a case-by-case inquiry. If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it.” In the case of “Heart on My Sleeve,” complicating matters is that the song was written by a human being. It’s impossible to say how a court challenge would play out. What is clear is that we’re only at the start of a very long discussion about the role of AI in music.

Source: AI-generated Drake and The Weeknd song pulled from streaming platforms | Engadget

Streaming Services Urged To Clamp Down on AI-Generated Music by Record Labels

Universal Music Group has told streaming platforms, including Spotify and Apple, to block artificial intelligence services from scraping melodies and lyrics from their copyrighted songs, according to emails viewed by the Financial Times. From the report: UMG, which controls about a third of the global music market, has become increasingly concerned about AI bots using their songs to train themselves to churn out music that sounds like popular artists. AI-generated songs have been popping up on streaming services and UMG has been sending takedown requests “left and right,” said a person familiar with the matter. The company is asking streaming companies to cut off access to their music catalogue for developers using it to train AI technology. “We will not hesitate to take steps to protect our rights and those of our artists,” UMG wrote to online platforms in March, in emails viewed by the FT. “This next generation of technology poses significant issues,” said a person close to the situation. “Much of [generative AI] is trained on popular music. You could say: compose a song that has the lyrics to be like Taylor Swift, but the vocals to be in the style of Bruno Mars, but I want the theme to be more Harry Styles. The output you get is due to the fact the AI has been trained on those artists’ intellectual property.”

Source: Streaming Services Urged To Clamp Down on AI-Generated Music – Slashdot

Basically, they don’t want AIs listening to their music as inspiration for making music. Which is exactly what humans do. So I’m very curious what legal basis would support their takedowns.

Google Will Require Android Apps to Make Account Deletion Easier

Right now, developers simply need to declare to Google that account deletion is somehow possible, but beginning next year, developers will have to make it easier to delete data through both their app and an online portal. Google specifies:

For apps that enable app account creation, developers will soon need to provide an option to initiate account and data deletion from within the app and online.

This means any app that lets you create an account to use it is required to allow you to delete that information when you’re done with it (or rather, request the developer delete the data from their servers). Although you can request that your data be deleted now, it usually requires manually contacting the developer to remove it. This new policy would mean developers have to offer a kill switch from the get-go rather than having Android users do the legwork.

The web deletion requirement is particularly new and must be “readily discoverable.” Developers must provide a link to a web form from the app’s Play Store landing page, with the idea being to let users delete account data even if they no longer have the app installed. Per the existing Android developer policy, all apps must declare how they collect and handle user data—Google introduced the policy in 2021 and made it mandatory last year. When you go into the Play Store and expand the “Data Safety” section under each app listing, developers list out data collection by criteria.

Simply removing an app from your Android device doesn’t completely scrub your data. Like software on a desktop operating system, files and folders are sometimes left behind from when the app was operating. This new policy will hopefully help you keep your data secure by wiping any unnecessary account info from the app developer’s servers, but it also hopes to cut down on straggling data on your device. Conversely, you don’t have to delete your data if you think you’ll come back to the app later. When it says you have a “choice,” Google wants to ensure it can point to something obvious.

It’s unclear how Google will determine if a developer follows the rules. It is up to the app developer to disclose whether user-specific app data is actually deleted. Earlier this year, Mozilla called out Google after discovering significant discrepancies between the top 20 most popular free apps’ internal privacy policies and those they listed in the Play Store.

https://gizmodo.com/google-android-delete-account-apps-request-uninstall-1850304540

Tesla Employees Have Been Meme-ing Your Private Car Videos

“We could see inside people’s garages and their private properties,” a former employee told Reuters. “Let’s say that a Tesla customer had something in their garage that was distinctive, you know, people would post those kinds of things.”

One office in particular, located in San Mateo, reportedly had a “free-wheeling” atmosphere, where employees would share videos and images with wild abandon. These pics or vids would often be “marked up” via Adobe Photoshop, former employees said, converting drivers’ personal experiences into memes that would circulate throughout the office.

“The people who buy the car, I don’t think they know that their privacy is, like, not respected,” one former employee was quoted as saying. “We could see them doing laundry and really intimate things. We could see their kids.”

Another former employee seemed to admit that all of this was very uncool: “It was a breach of privacy, to be honest. And I always joked that I would never buy a Tesla after seeing how they treated some of these people,” the employee told the news outlet. Yes, it’s always a vote of confidence when a company’s own employees won’t use the products that they sell.

Privacy concerns related to Tesla’s data-guzzling autos aren’t exactly new. Back in 2021, the Chinese government formally banned the vehicles on the premises of certain military installations, calling the company a “national security” threat. The Chinese were worried that the cars’ sensors and cameras could be used to funnel data out of China and back to the U.S. for the purposes of espionage. Beijing seems to have been on to something—although it might be the case that the spying threat comes less from America’s spooks than it does from bored slackers back at Tesla HQ.

One of the reasons that Tesla’s cameras seem so creepy is that you can never really tell if they’re on or not. A couple of years ago, a stationary Tesla helped catch a suspect in a Massachusetts hate crime, when its security system captured images of the man slashing tires in the parking lot of a predominantly Black church. The man was later arrested on the basis of the photos.

Reuters notes that it wasn’t ultimately “able to determine if the practice of sharing recordings, which occurred within some parts of Tesla as recently as last year, continues today or how widespread it was.”

With all this in mind, you might as well always assume that your Tesla is watching, right? And, now that Reuters’ story has come out, you should also probably assume that some bored coder is also watching, potentially in the hopes of converting your dopiest in-car moment into a meme.

https://gizmodo.com/tesla-elon-musk-car-camera-videos-employees-watching-1850307575

Wow, who knew? How surprising… not.

Tesla workers shared and memed sensitive images recorded by customer cars

Private camera recordings, captured by cars, were shared in chat rooms: ex-workers
Circulated clips included one of child being hit by car: ex-employees
Tesla says recordings made by vehicle cameras ‘remain anonymous’
One video showed submersible vehicle from James Bond film, owned by Elon Musk


LONDON/SAN FRANCISCO, April 6 (Reuters) – Tesla Inc assures its millions of electric car owners that their privacy “is and will always be enormously important to us.” The cameras it builds into vehicles to assist driving, it notes on its website, are “designed from the ground up to protect your privacy.”

But between 2019 and 2022, groups of Tesla employees privately shared via an internal messaging system sometimes highly invasive videos and images recorded by customers’ car cameras, according to interviews by Reuters with nine former employees.

Some of the recordings caught Tesla customers in embarrassing situations. One ex-employee described a video of a man approaching a vehicle completely naked.

Also shared: crashes and road-rage incidents. One crash video in 2021 showed a Tesla driving at high speed in a residential area hitting a child riding a bike, according to another ex-employee. The child flew in one direction, the bike in another. The video spread around a Tesla office in San Mateo, California, via private one-on-one chats, “like wildfire,” the ex-employee said.

Other images were more mundane, such as pictures of dogs and funny road signs that employees made into memes by embellishing them with amusing captions or commentary, before posting them in private group chats. While some postings were only shared between two employees, others could be seen by scores of them, according to several ex-employees.

Tesla states in its online “Customer Privacy Notice” that its “camera recordings remain anonymous and are not linked to you or your vehicle.” But seven former employees told Reuters the computer program they used at work could show the location of recordings – which potentially could reveal where a Tesla owner lived.

One ex-employee also said that some recordings appeared to have been made when cars were parked and turned off. Several years ago, Tesla would receive video recordings from its vehicles even when they were off, if owners gave consent. It has since stopped doing so.

“We could see inside people’s garages and their private properties,” said another former employee. “Let’s say that a Tesla customer had something in their garage that was distinctive, you know, people would post those kinds of things.”

Tesla didn’t respond to detailed questions sent to the company for this report.

About three years ago, some employees stumbled upon and shared a video of a unique submersible vehicle parked inside a garage, according to two people who viewed it. Nicknamed “Wet Nellie,” the white Lotus Esprit sub had been featured in the 1977 James Bond film, “The Spy Who Loved Me.”

The vehicle’s owner: Tesla Chief Executive Elon Musk, who had bought it for about $968,000 at an auction in 2013. It is not clear whether Musk was aware of the video or that it had been shared.

[Photo: The submersible Lotus vehicle nicknamed “Wet Nellie” that featured in the 1977 James Bond film, “The Spy Who Loved Me,” and which Tesla chief executive Elon Musk purchased in 2013. Tim Scott ©2013 Courtesy of RM Sotheby’s]

Musk didn’t respond to a request for comment.

To report this story, Reuters contacted more than 300 former Tesla employees who had worked at the company over the past nine years and were involved in developing its self-driving system. More than a dozen agreed to answer questions, all speaking on condition of anonymity.

Reuters wasn’t able to obtain any of the shared videos or images, which ex-employees said they hadn’t kept. The news agency also wasn’t able to determine if the practice of sharing recordings, which occurred within some parts of Tesla as recently as last year, continues today or how widespread it was. Some former employees contacted said the only sharing they observed was for legitimate work purposes, such as seeking assistance from colleagues or supervisors.

https://www.reuters.com/technology/tesla-workers-shared-sensitive-images-recorded-by-customer-cars-2023-04-06/

ICE Is Grabbing Data From Schools, Abortion Clinics and news orgs with no judicial oversight

US Immigration and Customs Enforcement agents are using an obscure legal tool to demand data from elementary schools, news organizations, and abortion clinics in ways that, some experts say, may be illegal.

While these administrative subpoenas, known as 1509 customs summonses, are meant to be used only in criminal investigations about illegal imports or unpaid customs duties, WIRED found that the agency has deployed them to seek records that seemingly have little or nothing to do with customs violations, according to legal experts and several recipients of the 1509 summonses.

A WIRED analysis of an Immigration and Customs Enforcement (ICE) subpoena tracking database, obtained through a Freedom of Information Act request, found that agents issued customs summonses more than 170,000 times from the beginning of 2016 through mid-August 2022. The primary recipients of 1509s include telecommunications companies, major tech firms, money transfer services, airlines, and even utility companies. But it’s the edge cases that have drawn the most concern among legal experts.

The outlier cases include customs summonses that sought records from a youth soccer league in Texas; surveillance video from a major abortion provider in Illinois; student records from an elementary school in Georgia; health records from a major state university’s student health services; data from three boards of elections or election departments; and data from a Lutheran organization that provides refugees with humanitarian and housing support.

In at least two instances, agents at ICE used customs summonses to pressure news organizations to reveal information about their sources.

All of this is done without judicial oversight.

[…]

The 1509 customs summons is an administrative subpoena explicitly and exclusively meant for use in investigations of illegal imports or unpaid customs duties under a law known as Title 19 US Code 1509. Its goal is to provide agencies like ICE with a way to obtain business records from companies without having to go to a judge for a warrant.

[…]

Without access to the underlying subpoenas ICE issued in each use of a 1509, it’s difficult to know exactly why companies in the database were issued customs summonses. However, nearly everyone we spoke to was concerned about the types of organizations that received these summonses. Our investigation found that ICE issued scores of customs summonses to hospitals and hundreds to elementary schools, high schools, and universities. “It’s disturbing,” Mao says. “I really can’t imagine how a student or a health record could possibly be relevant to a permissible customs investigation under the law.”

To figure out if these summonses were issued for customs investigations, we contacted 30 organizations that received them. Most did not respond, and many who did refused to speak on the record for fear of retaliation.

[…]

In March last year, US senator Ron Wyden, an Oregon Democrat who chairs the Senate Finance Committee, revealed that ICE had been using 1509 customs summonses to obtain millions of money transfer records, which were added to a database that was shared with hundreds of law enforcement agencies across the country. According to the American Civil Liberties Union (ACLU), it was one of the largest government surveillance programs in recent memory.

Immediately after Wyden’s investigation, the number of customs summonses issued by ICE fell from 3,683 in March 2022 to 1,650 by the end of August, according to the records WIRED obtained.

[…]


Source: ICE Is Grabbing Data From Schools and Abortion Clinics | WIRED

Cruz, Warren Intro America Act to Break Up huge advertisers

[…]

The Advertising Middlemen Endangering Rigorous Internet Competition Accountability Act, aka the AMERICA Act. Say what you will about government; Congress’ acronym acumen is untouchable. Introduced by Republican Sen. Mike Lee of Utah, the bill would prohibit companies from owning multiple parts of the digital ad ecosystem if they “process more than $20 billion in digital ad transactions.”

The bill would kneecap Google and Meta, the two biggest players in digital advertising by far, but its provisions seem designed to affect almost every big tech company from Apple to Amazon, too. Google, Meta, Amazon, and Apple did not respond to requests for comment.

The only thing longer than the name of the bill is the stunningly bipartisan list of Senators supporting it: Democrats Amy Klobuchar, Richard Blumenthal, and Elizabeth Warren, and Republicans Ted Cruz, Marco Rubio, Eric Schmitt, Josh Hawley, John Kennedy, Lindsey Graham, J.D. Vance, and Lee. As one observer put it on Twitter, it’s a list of cosponsors “who wouldn’t hold the elevator for each other.” Look at all these little Senators getting along. Isn’t that nice?

[…]

“If enacted into law, this bill would most likely require Google and Facebook to divest significant portions of their advertising businesses—business units that account for or facilitate a large portion of their ad revenue,” Sen. Lee said in a fact sheet about the bill. “Amazon may also have to make divestments, and the bill will impact Apple’s accelerating entry into third-party ads.”

[…]

When you see an ad online, it’s usually the result of a lightspeed bidding war. On one side, the demand side, you have companies who want to buy ads. On the other, the supply side, are apps and websites who have ad space to sell. Advertisers use demand-side tech to compete for the most profitable ad space for their products. Publishers, like Gizmodo.com, use supply-side tech, where they compete to sell the most profitable ads. Sometimes there’s a third piece of tech involved called an “exchange,” which is a service that connects demand-side platforms and supply-side platforms to arrange even more complicated auctions.
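
To make those moving parts concrete, here is a toy sketch of the kind of auction an exchange runs. This is my own illustration under simplified second-price rules, not any platform’s actual code, and all names and numbers are invented.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str  # demand-side bidder
    amount: float    # dollars offered for this one impression

def run_auction(bids: list[Bid]) -> tuple[Bid, float] | None:
    """Pick a winner under simplified second-price rules: the highest
    bidder wins but pays just above the runner-up's bid."""
    if not bids:
        return None
    ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
    winner = ranked[0]
    # Winner pays a penny more than the second-highest bid, or their own
    # bid if they were the only bidder.
    price = ranked[1].amount + 0.01 if len(ranked) > 1 else winner.amount
    return winner, min(price, winner.amount)

# Example: three advertisers compete for one ad slot on a publisher's page.
result = run_auction([
    Bid("shoe-brand", 2.50),
    Bid("car-brand", 1.75),
    Bid("app-install", 2.10),
])
if result:
    winner, price = result
    print(f"{winner.advertiser} wins the impression and pays ${price:.2f}")
```

Real exchanges run far more elaborate versions of this, millions of times a second.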

Your friends at Google operate the most popular demand-side platform. Google also owns the most popular supply-side platform, and it runs the most popular exchange. And Google is also a publisher, because it sells ad space on places like YouTube and Search. Meta likewise has its hands in multiple corners of the market. Here’s an analogy: it’s like if the realtor you contracted to represent you in buying a house had also been contracted by the people selling the house. It would be hard to trust that anyone was getting a fair deal, wouldn’t it? That realtor would be in a unique position to jack up the prices for everyone and make extra cash. The dominance is quantifiable — Google itself estimates that it snatches a stunning 35% of every dollar spent on digital ads.

Some people think this is all a little unfair! Unfortunately for Google and Meta, more and more of those people work for the US government.

[…]

Source: Cruz, Warren Intro America Act to Break Up Google, Facebook

This only targets a specific part of the monopolies/duopolies these companies hold, but it’s hugely bipartisan so we take what we can get.

‘A Blow for Libraries’: Internet Archive Loses Copyright Infringement Lawsuit by money-grubbing publishers

A judge ruled against Internet Archive, a free online digital library, on Friday in a lawsuit filed by four top publishers who claimed the company was in violation of copyright laws. The publishers, Hachette Book Group, HarperCollins, John Wiley & Sons, and Penguin Random House filed the lawsuit against Internet Archive in 2020, claiming the company had illegally scanned and uploaded 127 of their books for readers to download for free, detracting from their sales and the authors’ royalties.

U.S. District Court Judge John G. Koeltl ruled in favor of the publishing houses, saying that Internet Archive was making “derivative” works by transforming printed books into e-books and distributing them.

[…]

Koeltl’s decision was in part based on the licensing rules under which libraries pay publishers for continued use of their digital book copies and are only permitted to lend those copies a specified number of times, as agreed by the publisher [not the writer!], before paying to renew the license.

[…]

However, according to the court ruling, Hachette and Penguin provide one- or two-year terms to libraries, in which the eBook can be rented an unlimited number of times before the library has to purchase a new license. HarperCollins allows the library to circulate a digital copy 26 times before the license has to be renewed, while Wiley has continued to experiment with several subscription models.

[…]

The judge ruled that because Internet Archive was purchasing the book only once before scanning it and lending each digital copy an unlimited number of times, it is an infringement of copyright and “concerns the way libraries lend eBooks.”

[…]

Source: ‘A Blow for Libraries’: Internet Archive Loses Copyright Infringement Lawsuit

The decision was “a blow to all libraries and the communities we serve,” argued Chris Freeland, the director of Open Libraries at the Internet Archive. In a blog post he argued the decision “impacts libraries across the U.S. who rely on controlled digital lending to connect their patrons with books online. It hurts authors by saying that unfair licensing models are the only way their books can be read online. And it holds back access to information in the digital age, harming all readers, everywhere.”

The Verge adds that the judge rejected “fair use” arguments which had previously protected a 2014 digital book preservation project by Google Books and HathiTrust:

Koeltl wrote that any “alleged benefits” from the Internet Archive’s library “cannot outweigh the market harm to the publishers,” declaring that “there is nothing transformative about [Internet Archive’s] copying and unauthorized lending,” and that copying these books doesn’t provide “criticism, commentary, or information about them.” He notes that the Google Books use was found “transformative” because it created a searchable database instead of simply publishing copies of books on the internet.

Their lending model works like this. They purchase a paper copy of the book, scan it to digital format, and then lend out the digital copy to one person at a time. Their argument is that this is no different than lending out the paper copy that they legally own to one person at a time. It is not as cut and dried as you make it out to be.
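
As a minimal sketch of that one-copy-at-a-time rule (my own illustration of the concept, not the Internet Archive’s actual software):

```python
class ControlledDigitalLending:
    """Toy model: digital checkouts never exceed owned print copies."""

    def __init__(self) -> None:
        self.owned: dict[str, int] = {}        # title -> print copies owned
        self.checked_out: dict[str, int] = {}  # title -> active digital loans

    def acquire(self, title: str, copies: int = 1) -> None:
        # A print copy is purchased and scanned before it can be lent.
        self.owned[title] = self.owned.get(title, 0) + copies

    def borrow(self, title: str) -> bool:
        out = self.checked_out.get(title, 0)
        if out < self.owned.get(title, 0):  # enforce owned-to-loaned ratio
            self.checked_out[title] = out + 1
            return True
        return False  # every owned copy is already on loan

    def return_book(self, title: str) -> None:
        if self.checked_out.get(title, 0) > 0:
            self.checked_out[title] -= 1

library = ControlledDigitalLending()
library.acquire("Example Novel")                 # one scanned print copy
assert library.borrow("Example Novel") is True   # first patron borrows it
assert library.borrow("Example Novel") is False  # second patron must wait
library.return_book("Example Novel")
assert library.borrow("Example Novel") is True   # it can circulate again
```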

Source: Internet Archive Loses in Court. Judge Rules They Can’t Scan and Lend eBooks

Last Monday was the day of the oral arguments in the Big Publishers’ lawsuit against libraries in the form of the Internet Archive. As we noted mid-week, publishers won’t quit until libraries are dead. And they got one step closer to that goal on Friday, when Judge John Koeltl wasted no time in rejecting every single one of the Internet Archive’s arguments.

The fact that the ruling came out on the Friday after the Monday oral arguments suggests pretty strongly that Judge Koeltl had his mind made up pretty quickly and was ready to kill a library with little delay. Of course, as we noted just last Wednesday, whoever lost at this stage was going to appeal, and the really important stuff was absolutely going to happen at the 2nd Circuit appeals court. It’s just that now the Internet Archive, and a bunch of important copyright concepts, are already starting to be knocked down a few levels.

I’ve heard from multiple people claiming that of course the Internet Archive was going to lose, because it was scanning books (!!) and lending them out and how could that be legal? But, the answer, as we explained multiple times, is that every piece of this copyright puzzle had already been deemed legal.

And the Internet Archive didn’t just jump into this without any thought. Two of the most well-known legal scholars regarding copyright and libraries, David Hansen and Kyle Courtney, had written a white paper detailing exactly how and why the approach the Internet Archive took with Controlled Digital Lending easily fit within the existing contours and precedents of copyright law.

But, as we and others have discussed for ages, in the copyright world, there’s a long history of courts ignoring what the law actually says and just coming up with some way to say something is infringement if it feels wrong to them. And that’s what happened here.

A key part of the ruling, as in a large percentage of cases that are about fair use, is looking at whether or not the use of the copy is “transformative.” Judge Koeltl is 100% positive it is not transformative.

There is nothing transformative about IA’s copying and unauthorized lending of the Works in Suit. IA does not reproduce the Works in Suit to provide criticism, commentary, or information about them. See 17 U.S.C. § 107. IA’s ebooks do not “add[] something new, with a further purpose or different character, altering the [originals] with new expression, meaning or message.” Campbell, 510 U.S. at 579. IA simply scans the Works in Suit to become ebooks and lends them to users of its Website for free. But a copyright holder holds the “exclusive[] right” to prepare, display, and distribute “derivative works based upon the copyrighted work.”

But… there’s a lot more to “transformative” use than simply adding something new or altering the meaning. In many cases, fair use is found in cases where you’re copying the exact same content, but for a different purpose, and the Internet Archive’s usage here seems pretty clearly transformative in that it’s changing the way the book can be consumed to make it easier for libraries to lend it out and patrons to read it. That is, the “transformation” is in the way the book can be lent, not the content of the book.

I know many people find this strange (and the judge did here as well) saying things like “but it’s the whole work.” Or “the use is the same because it’s still just reading the book.” But the Supreme Court already said, quite clearly, that such situations can be fair use, such as in the Sony v. Universal case that decided VCRs were legal, and that time shifting TV shows was clear fair use. In that ruling, they even cite Congress noting that “making a copy of a copyright work for… convenience” can be considered fair use.

Unfortunately, Judge Koeltl effectively chops away a huge part of the Sony ruling in insisting that this is somehow different.

But Sony is plainly inapposite. IA is not comparable to the parties in Sony — either to Sony, the alleged contributory copyright infringer, or to the home viewers who used the Betamax machine for the noncommercial, nonprofit activity of watching television programs at home. Unlike Sony, which only sold the machines, IA scans a massive number of copies of books and makes them available to patrons rather than purchasing ebook licenses from the Publishers. IA is also unlike the home viewers in Sony, who engaged in the “noncommercial, nonprofit activity” of viewing at a more convenient time television programs that they had the right to view for free at the time they were originally broadcast. 464 U.S. at 449. The home viewers were not accused of making their television programs available to the general public. Although IA has the right to lend print books it lawfully acquired, it does not have the right to scan those books and lend the digital copies en masse.

But note what the Judge did here. Rather than rely on the text of what the Supreme Court actually said in Sony, he insists that he won’t apply the rules of Sony because the parties are different. But if the basic concepts and actions are covered by the Sony ruling, it seems silly to ignore them here as the judge did.

And the differences highlighted by the court here have no bearing on the actual issues and the specifics of fair use and the law. I mean, first of all, the fact that Koeltl claims that the Internet Archive is not engaged in “noncommercial, nonprofit activity” is just weird. The Internet Archive is absolutely engaged in noncommercial, nonprofit activity.

The other distinctions are meaningless as well. No, IA is not building devices for people to buy, but in many ways IA’s position here should be seen as stronger than Sony’s because Sony actually was a commercial operation, and IA is literally acting as a library, increasing the convenience for its patrons, and doing so in a manner that is identical to lending out physical books. Sony created a machine, Betamax, that copied TV shows and allowed those who bought those machines to watch the show at a more convenient time. IA created a machine that copies books, and allows library patrons to access those books in a more convenient way.

Also, the Betamax (and VCR) were just as “available to the general public” as the Internet Archive is. The idea that they are substantially different is just… weird. And strikes me as pretty clearly wrong.

There’s another precedential oddity in the ruling. It relies pretty heavily on the somewhat terrible fair use ruling in the 2nd Circuit in the Warhol Foundation v. Goldsmith case. That case was so terrible that we (at the Copia Institute) weighed in with the Supreme Court to let them know how problematic it was, and the Supreme Court is still sitting on a decision in that case.

Which means the Supreme Court is soon to rule on it, and that could very much change or obliterate the case that Judge Koeltl leans on heavily for his ruling.

Here, Judge Koeltl repeatedly goes back to the Warhol well to make various arguments, especially around the question of the fourth fair use factor: the effect on the market. To me, this clearly weighs towards fair use, because it’s no different than a regular library. Libraries are allowed to buy (or receive donated) books and lend them out. That’s all the Open Library does here. So to argue there’s a negative impact on the market, the publishers rely on the fact that they’ve been able to twist and bend copyright law so much that they’ve created a new, extortionate market in ebook “licenses,” and then play all sorts of games to force people to buy the books rather than check them out of the library.

Judge Koeltl seems particularly worried about how much damage this could do to this artificially inflated market:

It is equally clear that if IA’s conduct “becomes widespread, it will adversely affect the potential market for the” Works in Suit. Andy Warhol Found., 11 F.4th at 48. IA could expand the Open Libraries project far beyond the current contributing partners, allowing new partners to contribute many more concurrent copies of the Works in Suit to increase the loan count. New organizations like IA also could emerge to perform similar functions, further diverting potential readers and libraries from accessing authorized library ebooks from the Publishers. This plainly risks expanded future displacement of the Publishers’ potential revenues.

But go back and read that paragraph again, and replace the key words to read that if libraries become widespread, it will adversely affect the potential market for buying books in bookstores… because libraries would be “diverting potential readers” from purchasing physical books, which “plainly risks expanded future displacement of the Publishers’ potential revenues.”

Again, the argument here is effectively that libraries themselves shouldn’t be allowed. And that seems like a problem?

Koeltl also falls into the ridiculous trap of saying that “you can’t compete with free” and that libraries will favor CDL-scanned books over licensed ones:

An accused infringer usurps an existing market “where the infringer’s target audience and the nature of the infringing content is the same as the original.” Cariou, 714 F.3d at 709; see also Andy Warhol Found., 11 F.4th at 50. That is the case here. For libraries that are entitled to partner with IA because they own print copies of books in IA’s collection, it is patently more desirable to offer IA’s bootleg ebooks than to pay for authorized ebook licenses. To state the obvious, “[i]t is difficult to compete with a product offered for free.” Sony BMG Music Ent. v. Tenenbaum, 672 F. Supp. 2d 217, 231 (D. Mass. 2009).

Except that’s literally wrong. The licensed ebooks have many features that the scanned ones don’t. And many people (myself included!) prefer to check out licensed ebooks from our local libraries rather than the CDL ones, because they’re more readable. My own library offers the ability to check out books from either one, and defaults to recommending the licensed ebooks, because they’re a better customer experience, which is how tons of products “compete with free” all the time.

I mean, not to be simplistic here, but the bottled water business in the US is an over $90 billion market for something most people can get for free (or effectively free) from the tap. That’s three times the size of the book publishing market. So, uh, maybe don’t say “it’s difficult to compete with free.” Other industries do it just fine. The publishers are just being lazy.

Besides, based on this interpretation of Warhol, basically anyone can destroy fair use by simply making up some new, crazy, ridiculously priced, highly restrictive license that covers the same space as the fair use alternative, and claim that the alternative destroys the “market” for this ridiculous license. That can’t be how fair use works.

Anyway, one hopes first that the Supreme Court rejects the terrible 2nd Circuit ruling in the Warhol Foundation case, and that this in turn forces Judge Koeltl to reconsider his argument. But given the pretzel he twisted himself into to ignore the Betamax case, it seems likely he’d still find against libraries like the Internet Archive.

Given that, it’s going to be important that the 2nd Circuit get this one right. As the Internet Archive’s Brewster Kahle said in a statement on the ruling:

“Libraries are more than the customer service departments for corporate database products. For democracy to thrive at global scale, libraries must be able to sustain their historic role in society—owning, preserving, and lending books.

This ruling is a blow for libraries, readers, and authors and we plan to appeal it.”

What happens next is going to be critical to the future of copyright online. Already people have pointed out how some of the verbiage in this ruling could have wide reaching impact on questions about copyright in generative AI products or many other kinds of fair use cases.

One hopes that the panel on the 2nd Circuit doesn’t breezily dismiss these issues like Judge Koeltl did.

Source: Publishers Get One Step Closer To Killing Libraries

This money grab by publishers is disgusting. For more information, I have referenced related articles here.

Nike Blocks F1 Champ Max Verstappen’s ‘Max 1’ Clothing Brand because they can own words now

[…]

Nike’s argument is that Max 1 is too similar to its longtime “Air Max” shoe line, including other “Max Force 1” products and other variations that include similar keywords. Verstappen had named his line of products after himself and his current racing number but encountered legal trouble soon after launch.

The Benelux Office for Intellectual Property—the trademark office for Belgium, the Netherlands, and Luxembourg—issued a report that Verstappen’s Max 1 brand carried a “likelihood of confusion” and posed a risk of consumers believing Max 1 products were associated with Nike.

Nike went as far as claiming that some designs in the Max 1 catalog were too similar to the apparel giant’s, while also alleging that the word MAX was prominently used and likened to Nike apparel. For these reasons, Verstappen was reportedly fined approximately $1,100 according to Express.

[…]

Source: Nike Blocks F1 Champ Max Verstappen’s ‘Max 1’ Clothing Brand

1. What about Pepsi Max

2. What about the name Max being much much older than Nike (so prior use)

3. What about people going around using the word max as in eg ‘that’s the max speed’ or ‘that’s the max that will go in’?

4. What the actual fuck, trademark law.

“Click-to-cancel” rule would penalize companies that make you cancel by phone

Canceling a subscription should be just as easy as signing up for the service, the Federal Trade Commission said in a proposed “click-to-cancel” rule announced today. If approved, the plan “would put an end to companies requiring you to call customer service to cancel an account that you opened on their website,” FTC commissioners said.

[…]

The FTC said the proposed rule would be enforced with civil penalties and let the commission return money to harmed consumers.

“The proposal states that if consumers can sign up for subscriptions online, they should be able to cancel online, with the same number of steps. If consumers can open an account over the phone, they should be able to cancel it over the phone, without endless delays,” FTC Chair Lina Khan wrote.

[…]

Source: “Click-to-cancel” rule would penalize companies that make you cancel by phone | Ars Technica

We need this globally!

Dashcam App Is a Driving-Nazi Informer’s Wet Dream, Sends Video of You Speeding and Other Infractions Directly to Police

Speed cameras have been around for a long time and so have dash cams. The uniquely devious idea of combining the two into a traffic hall monitor’s dream device only recently became a realistic prospect, though. According to the British Royal Automobile Club, such a combination is coming soon. The app, which is reportedly available in the U.K. as soon as May, will allow drivers to report each other directly to the police with video evidence for things like running red lights, failure to use a blinker, distracted driving, and yes, speeding.

Its founder Oleksiy Afonin recently held meetings with police to discuss how it would work. In a nutshell, video evidence of a crime could be uploaded as soon as the driver who captured it stopped their vehicle to do so safely. According to the RAC, the footage could then be “submitted to the police through an official video portal in less than a minute.” Police reportedly were open to the idea of using the videos as evidence in court.

The RAC questioned whether such an app could be distracting. It certainly opens up a whole new world of crime reporting. In some cities, individuals can report poorly or illegally parked cars to traffic police. Drivers getting into the habit of reporting each other for speeding might be a slippery slope, though. The government would be happy to collect the ticket revenue but the number of citations for alleged speeding could be off the charts with such a system. Anybody can download the app and report someone else, but the evidence would need to be reviewed.

The app, called dashcamUK, will only be available in the United Kingdom, as its name indicates. Thankfully, it doesn’t seem like there are any plans to bring it Stateside. Considering the British public is far more open to the use of CCTV cameras in terms of recording crimes than Americans are, it will likely stay that way for that reason, among others.

Source: Strangers Can Send Video of You Speeding Directly to Police With Dashcam App

TSA Confirms Biometric Scanning Soon Won’t Be Optional Even For Domestic Travelers

[…]

In 2017, the DHS began quietly rolling out its facial recognition program, starting with international airports and aimed mainly at collecting/scanning people boarding international flights. Even in its infancy, the DHS was hinting this was never going to remain solely an international affair.

It made its domestic desires official shortly thereafter, with the TSA dropping its domestic surveillance “roadmap” which now included “expanding biometrics to additional domestic travelers.” Then the DHS and TSA ran silent for a bit, resurfacing in late 2022 with the news it was rolling out its facial recognition system at 16 domestic airports.

As of January, the DHS and TSA were still claiming this biometric ID verification system was strictly opt-in. A TSA rep interviewed by the Washington Post, however, hinted that opting out just meant subjecting yourself to the worst in TSA customer service. Given the options, more travelers would obviously prefer a less brusque/hands-y trip through security checkpoints, ensuring healthy participation in the TSA’s “optional” facial recognition program.

A little more than two months have passed, and the TSA is now informing domestic travelers there will soon be no way to opt out of its biometric program. (via Papers Please)

Speaking at an aviation security panel at South by Southwest, TSA Administrator David Pekoske made these comments:

“We’re upgrading our camera systems all the time, upgrading our lighting systems,” Pekoske said. “(We’re) upgrading our algorithms, so that we are using the very most advanced algorithms and technology we possibly can.”

He said passengers can also choose to opt out of certain screening processes if they are uncomfortable, for now. Eventually, biometrics won’t be optional, he said.

[…]

Pekoske buries the problematic aspects of biometric harvesting in exchange for domestic travel “privileges” by claiming this is all about making things better for passengers.

“It’s critically important that this system has as little friction as it possibly can, while we provide for safety and security,” Pekoske said.

Yes, you’ll get through screening a little faster. Unless the AI is wrong, in which case you’ll be dealing with a whole bunch of new problems most agents likely won’t have the expertise to handle.

[…]

More travelers. Fewer agents. And a whole bunch of screens to interact with. That’s the plan for the nation’s airports and everyone who passes through them.

Source: TSA Confirms Biometric Scanning Soon Won’t Be Optional Even For Domestic Travelers | Techdirt

And way more data that hackers can get their hands on, and that the government and data buyers can use for 1984-type purposes.

Big Four publishers move to crush the Internet Archive

On Monday four of the largest book publishers asked a New York court to grant summary judgment in a copyright lawsuit seeking to shut down the Internet Archive’s online library and hold the non-profit organization liable for damages.

The lawsuit was filed back on June 1, 2020, by the Hachette Book Group, HarperCollins Publishers, John Wiley & Sons and Penguin Random House. In the complaint [PDF], the publishers ask for an injunction ordering that “all unlawful copies be destroyed” in the online archive.

The central question in the case, as summarized during oral arguments by Judge John Koeltl, is: does a library have the right to make a copy of a book that it otherwise owns and then lend the ebook it has made without a license from the publisher to patrons of the library?

Publishers object to the Internet Archive’s efforts to scan printed books and make digital copies available online to readers without buying a license from the publisher. The Internet Archive has filed its own motion for summary judgment to have the case dismissed.

The Internet Archive (IA) began its book scanning project back in 2006 and by 2011 started lending out digital copies. It did so, however, in a way that maintained the limitation imposed by physical book ownership.

Its Controlled Digital Lending (CDL) initiative allows only one person to check out the digital copy of each scanned physical book. The idea is that the purchased physical book is being lent in digital form but no extra copies are being lent. IA presently offers 1.3 million books to the public in digital form.

“This activity is fundamentally the same as traditional library lending and poses no new harm to authors or the publishing industry,” IA argued in answer [PDF] to the publisher’s complaint.

“Libraries have collectively paid publishers billions of dollars for the books in their print collections and are investing enormous resources in digitization in order to preserve those texts. CDL helps them take the next step by making sure the public can make full use of the books that libraries have bought.”
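
For readers who want the mechanics: CDL boils down to a one-copy-one-loan invariant. Here is a minimal Python sketch of that rule, a hypothetical model for illustration rather than IA’s actual implementation:

```python
# Hypothetical one-copy-one-loan model illustrating Controlled Digital
# Lending (CDL) -- a sketch of the concept, not IA's actual code.

class CDLTitle:
    def __init__(self, title: str, physical_copies_owned: int):
        self.title = title
        self.physical_copies_owned = physical_copies_owned  # print copies the library bought
        self.digital_checkouts = 0  # concurrent digital loans outstanding

    def checkout(self) -> bool:
        # The CDL invariant: digital loans never exceed owned physical copies.
        if self.digital_checkouts < self.physical_copies_owned:
            self.digital_checkouts += 1
            return True
        return False  # the patron waits, just as for a lent-out print copy

    def checkin(self) -> None:
        self.digital_checkouts = max(0, self.digital_checkouts - 1)

book = CDLTitle("Example Novel", physical_copies_owned=1)
assert book.checkout()       # first patron borrows the scan
assert not book.checkout()   # second patron must wait
book.checkin()
assert book.checkout()       # a returned copy can circulate again
```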

The publishers, however, want libraries to pay for ebooks in addition to the physical books they have already purchased. And they claim they have lost millions in revenue, though IA insists there is no evidence of these presumed losses.

“Brewster Kahle, Internet Archive’s founder and funder, is on a mission to make all knowledge free. And his goal is to circulate ebooks to billions of people by transforming all library collections from analog to digital,” said Elizabeth McNamara, attorney for the publishers, during Monday’s hearing.

“But IA does not want to pay authors or publishers to realize this grand scheme and they argue it can be excused from paying the customary fees because what they’re doing is in the public interest.”

Kahle in a statement denounced the publishers’ demands. “Here’s what’s at stake in this case: hundreds of libraries contributed millions of books to the Internet Archive for preservation in addition to those books we have purchased,” he said.

“Thousands of donors provided the funds to digitize them.

“The publishers are now demanding that those millions of digitized books, not only be made inaccessible, but be destroyed. This is horrendous. Let me say it again – the publishers are demanding that millions of digitized books be destroyed.

“And if they succeed in destroying our books or even making many of them inaccessible, there will be a chilling effect on the hundreds of other libraries that lend digitized books as we do.”

[…]

Source: Big Four publishers move to crush the Internet Archive • The Register

AI-generated art may be protected, says US Copyright Office – requires meaningful creative input from a human

[…]

AI software capable of automatically generating images or text from an input prompt or instruction has made it easier for people to churn out content. Correspondingly, the USCO has received an increasing number of applications to register copyright protections for material, especially artwork, created using such tools.

US law states that intellectual property can be copyrighted only if it was the product of human creativity, and the USCO only acknowledges work authored by humans at present. Machines and generative AI algorithms, therefore, cannot be authors, and their outputs are not copyrightable.

Digital art, poems, and books generated using tools like DALL-E, Stable Diffusion, Midjourney, ChatGPT, or even the newly released GPT-4 will not be protected by copyright if they were created by humans using only a text description or prompt, USCO director Shira Perlmutter warned.

“If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it,” she wrote in a document outlining copyright guidelines.

“For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘traditional elements of authorship’ are determined and executed by the technology – not the human user.

“Instead, these prompts function more like instructions to a commissioned artist – they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output.”

The USCO will consider content created using AI if a human author has crafted something beyond the machine’s direct output. A digital artwork that was formed from a prompt, and then edited further using Photoshop, for example, is more likely to be accepted by the office. The initial image created using AI would not be copyrightable, but the final product produced by the artist might be.

Thus it would appear the USCO is simply saying: yes, if you use an AI-powered application to help create something, you have a reasonable chance of securing copyright, just as if you had used non-AI software. If it’s purely machine-made from a prompt, you need to put some more human effort into it.

In a recent case, officials registered a copyright certificate for a graphic novel containing images created using Midjourney. The overall composition and words were protected by copyright since they were selected and arranged by a human, but the individual images themselves were not.

“In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of ‘mechanical reproduction’ or instead of an author’s ‘own original mental conception, to which [the author] gave visible form’. The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This is necessarily a case-by-case inquiry,” the USCO declared.

Perlmutter urged people applying for copyright protection for any material generated using AI to state clearly how the software was used to create the content, and show which parts of the work were created by humans. If they fail to disclose this information accurately, or try to hide the fact it was generated by AI, USCO will cancel their certificate of registration and their work may not be protected by copyright law.

Source: AI-generated art may be protected, says US Copyright Office • The Register

So very slowly but surely the copyrighters are starting to understand what this newfangled AI technology is all about.

So what happens when an AI edits an AI-generated artwork?

SCOPE Europe becomes the accredited monitoring body for a Dutch national data protection code of conduct

[…] SCOPE Europe is now accredited by the Dutch Data Protection Authority as the monitoring body of the Data Pro Code. On this occasion, SCOPE Europe celebrates its success in obtaining its second accreditation and looks forward to continuing its work on fostering trust in the digital economy.

When we were approached by NLdigital, the creators of the Data Pro Code, we knew that taking on the monitoring of a national code of conduct would be an exciting endeavor. As the first-ever accredited monitoring body for a transnational GDPR code of conduct, SCOPE Europe has built unique expertise in the field and is proud to apply it further in the context of another co-regulatory initiative.

The Code puts forward an accessible compliance framework for companies of all sizes, including micro, small and medium enterprises in the Netherlands. With the approval and now the accreditation of its monitoring body, the Data Pro Code will enable data processors to demonstrate GDPR compliance and boost transparency within the digital industry.

Source: PRESS RELEASE: SCOPE Europe becomes the accredited monitoring body for a Dutch national code of conduct: SCOPE Europe bvba/sprl

Anker Eufy security cam ‘stored unique ID’ of everyone filmed in the cloud for other cameras to identify – and for anyone to watch

A lawsuit filed against eufy security cam maker Anker Tech claims the biz assigns “unique identifiers” to the faces of any person who walks in front of its devices – and then stores that data in the cloud, “essentially logging the locations of unsuspecting individuals” when they stroll past.

[…]

All three suits allege Anker falsely represented that its security cameras stored all data locally and did not upload that data to the cloud.

Moore went public with his claims in November last year, alleging video and audio captured by Anker’s eufy security cams could be streamed and watched by any stranger using VLC media player, […]

In a YouTube video, the complaint details, Moore allegedly showed how the “supposedly ‘private,’ ‘stored locally’, ‘transmitted only to you’ doorbell is streaming to the cloud – without cloud storage enabled.”

He claimed the devices were uploading video thumbnails and facial recognition data to Anker’s cloud server despite his never opting into Anker’s cloud services, and said he’d found that a separate camera tied to a different account could identify his face with the same unique ID.

The security researcher alleged at the time that this showed Anker was not only storing facial-recognition data in the cloud but also “sharing that back-end information between accounts,” lawyers for the two other, near-identical lawsuits claim.

[…]

According to the complaint [PDF], eufy’s security cameras are marketed as “private” and as “local storage only” as a direct alternative to Anker’s competitors that require the use of cloud storage.

Desai’s complaint goes on to claim:

Not only does Anker not keep consumers’ information private, it was further revealed that Anker was uploading facial recognition data and biometrics to its Amazon Web Services cloud without encryption.

In fact, Anker has been storing its customers’ data alongside a specific username and other identifiable information on its AWS cloud servers even when its “eufy” app reflects the data has been deleted. …. Further, even when using a different camera, different username, and even a different HomeBase to “store” the footage locally, Anker is still tagging and linking a user’s facial ID to their picture across its camera platform. Meaning, once recorded on one eufy Security Camera, those same individuals are recognized via their biometrics on other eufy Security Cameras.
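
The cross-account claim hinges on a persistent identifier derived from a face. The toy Python sketch below is entirely hypothetical (not Anker’s code, and real systems match face embeddings by similarity rather than exact bytes), but it shows why such an ID links sightings across cameras and accounts:

```python
# Toy illustration (hypothetical) of why a persistent per-face identifier
# enables cross-account tracking. Real systems compare embeddings by
# similarity, not exact bytes; this sketch simplifies that away.
import hashlib

def face_id(embedding: bytes) -> str:
    # A deterministic ID derived from a face: the same face yields the
    # same ID no matter which camera or account uploaded the footage.
    return hashlib.sha256(embedding).hexdigest()[:16]

sightings: dict[str, list[tuple[str, str]]] = {}  # face ID -> (camera, account)

def record_sighting(embedding: bytes, camera: str, account: str) -> None:
    sightings.setdefault(face_id(embedding), []).append((camera, account))

same_face = b"stand-in for a face embedding vector"
record_sighting(same_face, camera="front-door", account="alice")
record_sighting(same_face, camera="warehouse", account="bob")

# One ID now links two cameras on two unrelated accounts -- which is
# exactly the behavior the lawsuits allege.
print(sightings)
```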

In an unrelated incident in 2021, a “software bug” in some of the brand’s 1080p Wi-Fi-connected Eufycams cams sent feeds from some users’ homes to other Eufycam customers, some of whom were in other countries at the time.

[…]

Source: Eufy security cam ‘stored unique ID’ of everyone filmed • The Register

Telehealth startup Cerebral shared millions of patients’ data with advertisers since 2019

Cerebral has revealed it shared the private health information, including mental health assessments, of more than 3.1 million patients in the United States with advertisers and social media giants like Facebook, Google and TikTok.

The telehealth startup, which exploded in popularity during the COVID-19 pandemic amid rolling lockdowns and a surge in online-only virtual health services, disclosed the security lapse [This is no security lapse! This is blatant greed served by peddling people’s personal information!] in a filing with the federal government, admitting that it shared the personal and health information of patients who used the app to search for therapy or other mental health care services.

Cerebral said that it collected and shared names, phone numbers, email addresses, dates of birth, IP addresses and other demographics, as well as data collected from Cerebral’s online mental health self-assessment, which may have also included the services that the patient selected, assessment responses and other associated health information.

The full disclosure follows:

If an individual created a Cerebral account, the information disclosed may have included name, phone number, email address, date of birth, IP address, Cerebral client ID number, and other demographic or health information. If, in addition to creating a Cerebral account, an individual also completed any portion of Cerebral’s online mental health self-assessment, the information disclosed may also have included the service the individual selected, assessment responses, and certain associated health information.

If, in addition to creating a Cerebral account and completing Cerebral’s online mental health self-assessment, an individual also purchased a subscription plan from Cerebral, the information disclosed may also have included subscription plan type, appointment dates and other booking information, treatment, and other clinical information, health insurance/pharmacy benefit information (for example, plan name and group/member numbers), and insurance co-pay amount.

Cerebral was sharing patients’ data with tech giants in real time by way of trackers and other data-collecting code that the startup embedded within its apps. Tech companies and advertisers like Google, Facebook and TikTok let developers embed snippets of their custom-built code in apps, which in turn lets those developers share information about their users’ activity with the tech giants, often under the guise of analytics but also for advertising.
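
To make concrete what “snippets of custom-built code” means in practice, here is a hypothetical Python sketch of the kind of call such a tracker makes. The endpoint, field names, and payload are invented for illustration and are not taken from Cerebral’s apps:

```python
# Hypothetical sketch of what an embedded analytics/ad tracker does:
# the app bundles a vendor snippet that phones home with user activity.
# The endpoint, field names, and payload below are invented.
import json
import urllib.request

def send_tracking_event(user_email: str, event: str, properties: dict) -> None:
    payload = {
        "event": event,                     # e.g. "completed_self_assessment"
        "identity": {"email": user_email},  # directly identifies the patient
        "properties": properties,           # can include health-related answers
    }
    request = urllib.request.Request(
        "https://tracker.example-adnetwork.com/v1/events",  # hypothetical endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # fires on every tracked in-app action

# A single call like this, wired into an app screen, is all it takes to
# ship sensitive responses to a third party:
# send_tracking_event("patient@example.com", "selected_service",
#                     {"service": "therapy", "assessment_score": 17})
```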

But users often have no idea that they are opting-in to this tracking simply by accepting the app’s terms of use and privacy policies, which many people don’t read.

Cerebral said in its notice to customers — buried at the bottom of its website — that the data collection and sharing has been going on since October 2019 when the startup was founded. The startup said it has removed the tracking code from its apps. While not mentioned, the tech giants are under no obligations to delete the data that Cerebral shared with them.

Because of how Cerebral handles confidential patient data, it’s covered under the U.S. health privacy law known as HIPAA. According to a list of health-related security lapses under investigation by the U.S. Department of Health and Human Services, which oversees and enforces HIPAA, Cerebral’s data lapse is the second-largest breach of health data in 2023.

News of Cerebral’s years-long data lapse comes just weeks after the U.S. Federal Trade Commission slapped GoodRx with a $1.5 million fine and ordered it to stop sharing patients’ health data with advertisers, and BetterHelp was ordered to pay customers $8.5 million for mishandling users’ data.

If you were wondering why startups today should terrify you, Cerebral is just the latest example.

Source: Telehealth startup Cerebral shared millions of patients’ data with advertisers | TechCrunch

 

Holy shit: German Courts Say DNS Service (Quad9) Is Implicated In Any Copyright Infringement At The Domains It Resolves

Back in September 2021 Techdirt covered an outrageous legal attack by Sony Music on Quad9, a free, recursive, anycast DNS platform. Quad9 is part of the Internet’s plumbing: it converts domain names to numerical IP addresses. It is operated by the Quad9 Foundation, a Swiss public-benefit, not-for-profit organization. Sony Music says that Quad9 is implicated in alleged copyright infringement on the sites it resolves. That’s clearly ridiculous, but unfortunately the Regional Court of Hamburg agreed with Sony Music’s argument, and issued an interim injunction against Quad9. The German Society for Civil Rights (Gesellschaft für Freiheitsrechte e.V. or “GFF”) summarizes the court’s thinking:

In its interim injunction the Regional Court of Hamburg asserts a claim against Quad9 based on the principles of the German legal concept of “Stoererhaftung” (interferer liability), on the grounds that Quad9 makes a contribution to a copyright infringement that gives rise to liability, in that Quad9 resolves the domain name of website A into the associated IP address. The German interferer liability has been criticized for years because of its excessive application to Internet cases. German lawmakers explicitly abolished interferer liability for access providers with the 2017 amendment to the German Telemedia Act (TMG), primarily to protect WIFI operators from being held liable for costs as interferers.

As that indicates, this is a case of a law that is a poor fit for modern technology. Just as the liability no longer applies to WIFI operators, who are simply providing Internet access, so the German law should also not catch DNS resolvers like Quad9. The GFF post notes that Quad9 has appealed to the Hamburg Higher Regional Court against the lower court’s decision. Unfortunately, another regional court has just handed down a similar ruling against the company, reported here by Heise Online (translation by DeepL):

the Leipzig Regional Court has sentenced the Zurich-based DNS service Quad9. On pain of an administrative fine of up to 250,000 euros or up to 2 years’ imprisonment, the small resolver operator was prohibited from translating two related domains into the corresponding IP addresses. Via these domains, users can find the tracks of a Sony music album offered via Shareplace.org.

The GFF has already announced that it will be appealing along with Quad9 to the Dresden Higher Regional Court against this new ruling. It says that the Leipzig Regional Court has made “a glaring error of judgment”, and explains:

“If one follows this reasoning, the copyright liability of completely neutral infrastructure services like Quad9 would be even stricter than that of social networks, which fall under the infamous Article 17 of the EU Copyright Directive,” criticizes Felix Reda, head of the Control © project of the Society for Civil Rights. “The [EU] Digital Services Act makes it unequivocally clear that the liability rules for Internet access providers apply to DNS services. We are confident that this misinterpretation of European and German legal principles will be overturned by the Court of Appeals.”

Let’s hope so. If it isn’t, we can expect companies providing the Internet’s basic infrastructure in the EU to be bombarded with demands from the copyright industry and others for domains to be excluded from DNS resolution. The likely result is that perfectly legal sites and their holdings will be ghosted by DNS companies, which will prefer to err on the side of caution rather than risk becoming the next Quad9.
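
For context, the entirety of Quad9’s “contribution” is answering lookups like the one below. This is a minimal sketch using the third-party dnspython library (assuming it is installed), pointed at Quad9’s public resolver at 9.9.9.9:

```python
# Minimal sketch of the only thing a recursive resolver like Quad9 does:
# translate a name into an address. Requires: pip install dnspython
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["9.9.9.9"]  # Quad9's public resolver

try:
    answer = resolver.resolve("example.com", "A")
    for record in answer:
        print(record.to_text())  # the IP address(es) for the name
except dns.resolver.NXDOMAIN:
    # A domain removed from resolution simply stops existing for the
    # resolver's users -- which is what the injunctions demand.
    print("domain does not resolve")
```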

Source: Another German Court Says The DNS Service Quad9 Is Implicated In Any Copyright Infringement At The Domains It Resolves | Techdirt

There are some incredibly stupid judges and lawyers out there

YouTube Chills the Darned Hell Out On Its Cursing Policy, but you still can’t fucking say fuck

Google is finally rolling back its unpopular decree against any kind of profanity in videos, which had made it harder for creators used to offering colorful sailor’s speech to monetize content on behalf of its beloved ad partners. The only thing is, Google still seems to think the “f-word” is excessively harsh language, so sorry Samuel L. Jackson, those motha-[redacted] snakes are still liable to earn fewer ad dollars on this motha-[redacted] plane.

On Tuesday, Google updated its support page to offer up an olive branch to crass creators upset that their potty mouths were resulting in their videos being demonetized. The company clarified that “moderate” profanity used at any point in a video is now eligible for ad revenue.

However, the company seemed to be antagonistic to “stronger profanity” like “the f-word,” AKA “fuck.” You can’t say “fuck” in the first seven seconds or repeatedly throughout a video or else you will receive “limited ads.” Putting words like “fuck” into a title or thumbnail will result in no ad content.

What is allowed are words like “hell” or “damn” in a title or thumbnail. Words like “bitch,” “douchebag,” “asshole,” and “shit” are considered “moderate profanity,” so they are fine to use frequently in a video. But “fuck,” dear god, will hurt advertisers’ poor virgin ears. YouTube has long been extremely sensitive to what its advertisers say. For instance, the platform came close to seeing big money-making ads pulled over creepy pasta content during the “Elsagate” scandal.

The changes also impacted videos which used music tracks in the background. YouTube is now saying any use of “moderate” or “strong” profanity in background music is eligible for full ad revenue.
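
Pieced together, the updated rules read like a small decision table. The toy Python function below is my reading of the policy as described above, not YouTube’s actual logic:

```python
# Toy model of the updated ad-suitability rules -- an interpretation of
# the policy as reported, not YouTube's real implementation.
def ad_status(strong_in_title_or_thumbnail: bool,
              strong_in_first_7_seconds: bool,
              strong_used_repeatedly: bool) -> str:
    if strong_in_title_or_thumbnail:
        return "no ads"          # "fuck" in a title/thumbnail: demonetized
    if strong_in_first_7_seconds or strong_used_repeatedly:
        return "limited ads"
    # Moderate profanity ("hell", "damn", "shit", ...) anywhere, and any
    # profanity in background music, is now eligible for full ad revenue.
    return "full ads"

print(ad_status(False, True, False))   # limited ads
print(ad_status(False, False, False))  # full ads
```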

Back in November, YouTube changed its creator monetization policy, calling it guidelines for “advertiser-friendly content.” The company decreed that any video with a thumbnail or title containing obscene language or “adult material” wouldn’t receive any ad revenue. YouTube also said it would demonetize violent content such as dead bodies without context, or virtual violence directed at a “real, named person.” Fair enough, but then YouTube said it would demonetize any video which used profanity “in the first eight seconds of the video.”

[…]

Source: YouTube Chills the Hell Out On Its Cursing Policy

What the shitting fuck, Google. Americans. I thought it was the land of the free, once?

When Given The Choice, Most Authors Reject Excessively Long Copyright Terms

Recently, Walled Culture mentioned the problem of orphan works. These are creations, typically books, that are still covered by copyright but unavailable, because the original publisher or distributor has gone out of business or simply isn’t interested in keeping them in circulation. The problem is that, without any obvious point of contact, it’s not possible to ask permission to re-publish or re-use them in some way.

It turns out that there is another serious issue, related to that of orphan works. It has been revealed by the New York Public Library, drawing on work carried out as a collaboration between the Internet Archive and the US Copyright Office. According to a report on the Vice Web site:

the New York Public Library (NYPL) has been reviewing the U.S. Copyright Office’s official registration and renewals records for creative works whose copyrights haven’t been renewed, and have thus been overlooked as part of the public domain.

The books in question were published between 1923 and 1964, before changes to U.S. copyright law removed the requirement for rights holders to renew their copyrights. According to Greg Cram, associate general counsel and director of information policy at NYPL, an initial overview of books published in that period shows that around 65 to 75 percent of rights holders opted not to renew their copyrights.

Since most people today will naturally assume that a book published between 1923 and 1964 is still in copyright, it is unlikely anyone has ever tried to re-publish or re-use material from this period. But this new research shows that the majority of these works are, in fact, already in the public domain, and therefore freely available for anyone to use as they wish.
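
The analysis itself is conceptually simple: join the registration records against the renewal records and flag whatever was never renewed. Here is a hypothetical Python sketch, with the record layout invented for illustration:

```python
# Hypothetical sketch of the renewal-records analysis described above:
# join registrations against renewals and flag never-renewed titles.
registrations = [
    {"id": "A123", "title": "Some 1940 Novel", "year": 1940},
    {"id": "A456", "title": "Another 1950 Novel", "year": 1950},
]
renewals = {"A456"}  # registration IDs whose copyright was later renewed

def likely_public_domain(regs, renewed_ids, start=1923, end=1964):
    # Under the pre-1964 rules, a work in this window whose copyright
    # was never renewed fell into the public domain.
    return [r for r in regs
            if start <= r["year"] <= end and r["id"] not in renewed_ids]

for book in likely_public_domain(registrations, renewals):
    print(book["title"], "-> likely public domain")
```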

That’s a good demonstration of how the dead hand of copyright stifles fresh creativity from today’s writers, artists, musicians and film-makers. They might have drawn on all these works as a stimulus for their own creativity, but held back because they have been brainwashed by the copyright industry into thinking that everything is in copyright for inordinate lengths of time. As a result, huge numbers of books that are freely available according to the law remain locked up with a kind of phantom copyright that exists only in people’s minds, infected as they are with copyright maximalist propaganda.

The other important lesson to be drawn from this work by the NYPL is that, given the choice, the majority of authors didn’t bother renewing their copyrights, presumably because they didn’t feel they needed to. That makes today’s automatic imposition of exaggeratedly long copyright terms not just unnecessary but also harmful, in terms of the potential new works, based on public domain materials, that have been lost as a result of this continuing over-protection.

Source: When Given The Choice, Most Authors Reject Excessively Long Copyright Terms | Techdirt

Texas Bill Would Make ISPs censor any abortion information

Last week, Texas introduced a bill that would make it illegal for internet service providers to let users access information about how to get abortion pills. The bill, called the Women and Child Safety Act, would also criminalize creating, editing, or hosting a website that helps people seek abortions.

If the bill passes, internet service providers (ISPs) will be forced to block websites “operated by or on behalf of an abortion provider or abortion fund.” ISPs would also have to filter any website that helps people who “provide or aid or abet elective abortions” in almost any way, including raising money.

[…]

Five years ago, a bill like this would have violated federal law. Remember Net Neutrality? Net Neutrality forced ISPs to act like phone companies, treating all traffic the same, with almost no ability to limit or filter the content traveling on their networks. But Net Neutrality was repealed in 2018, essentially reclassifying internet service as a luxury with little regulatory oversight and upending consumers’ right to free access to the web.

[…]

Source: Texas Bill Would Bar ISPs From Hosting Abortion Websites, Info