EPIC urges FTC to investigate Grindr’s data practices

On Wednesday, EPIC filed a complaint with the US government watchdog over Grindr’s “apparent failure to safeguard users’ sensitive personal data.” This includes both present and past users who have since deleted their accounts, according to the complaint. Despite promising in its privacy policy to delete personal info if customers remove their account, Grindr allegedly retained and disclosed some of this data to third parties.

Considering that people trust the dating app with a ton of very sensitive information — this includes their sexual preferences, self-reported HIV status, chat history, photos including nudes, and location information — “learning that Grindr breaks the promises it makes to users would likely affect a consumer’s decision regarding whether to use Grindr,” the complaint states [PDF].

Grindr, for its part, says privacy is of the utmost importance to it, and that these “unfounded” claims stem from allegations made by a disgruntled ex-worker. So that’s all right then.

“Privacy is a top priority for Grindr and the LGBTQ+ community we serve, and we have adopted industry-leading privacy practices and tools to protect and empower our users,” a spokesperson told The Register.

“We are sorry that the former employee behind the unfounded allegations in today’s request is dissatisfied with his departure from the company; we wish him the best.”

The former employee in question is Grindr’s ex-chief privacy officer Ron De Jesus. In June, De Jesus filed a wrongful termination lawsuit [PDF] against his former bosses that also accused the dating app of violating privacy laws.

According to the lawsuit, De Jesus was “leading the charge to keep Grindr compliant with state, national, and international laws” after Norway’s data protection agency fined the dating app biz about $12 million in December 2021 and a Wall Street Journal article in May 2022 accused the application developer of selling users’ location data.

But despite De Jesus’ attempts, “Grindr placed profit over privacy and got rid of Mr De Jesus for his efforts and reports,” the lawsuit alleges.

EPIC’s complaint, which highlights De Jesus’ allegations, asks the FTC to look into potential violations of privacy law, including deceptive data retention and disclosure practices.

It also accuses Grindr of violating the Health Breach Notification Rule (HBNR). The dating app is subject to the HBNR because it asks users to self-report health data including HIV status, last-tested date, and vaccination status. By sharing these records with third parties and retaining health data after users deleted their accounts, Grindr allegedly breached the HBNR, EPIC says.

The privacy advocates at EPIC want the FTC to make Grindr comply with the laws and stop any “unlawful or impermissible” data retention practices. Additionally, the complaint calls on the federal agency to force Grindr to notify any users whose data was misused, and to impose fines against the dating app for any violations of the HBNR.

Source: EPIC urges FTC to investigate Grindr’s data practices • The Register

Singapore plans to scan your face instead of your passport

[…] “Singapore will be one of the first few countries in the world to introduce automated, passport-free immigration clearance,” said minister for communications and information Josephine Teo in a wrap-up speech for the bill. Teo conceded that Dubai already offers such clearance for select enrolled travelers, but noted there was no indication that other countries were planning similar moves.

[…]

Another reason passports will likely remain relevant in Singapore airports is checking in with airlines. Airlines check passports not just to confirm identity, but also to verify visas and other documents. Airlines are often held responsible for stranded passengers, so they will likely be required to confirm travelers have the documentation required to enter their destination.

The Register asked Singapore Airlines to confirm if passports will still be required on the airline after the implementation of biometric clearance. They deferred to Changi’s operator, Changi Airport Group (CAG), which The Reg also contacted – and we will update if a relevant reply arises.

What travelers will see is an expansion of a program already taking form. Changi airport currently uses facial recognition software and automated clearance for some parts of immigration.

[…]

Passengers who pre-submit required declarations online can already get through Singapore’s current automated immigration lanes in 20 to 30 seconds once they arrive at the front of the queue. It’s one reason Changi has a reputation for being quick to navigate.

[…]

According to CAG, the airport handled 5.12 million passenger movements in June 2023 alone. That figure, currently at 88 percent of pre-COVID levels, is expected only to increase, and the government sees such efficiency as critical to managing the impending growth.

But the reasoning for biometric clearance goes beyond a boom in travelers. With an aging population and shrinking workforce, Singapore’s Immigration & Checkpoints Authority (ICA) will have “to cope without a significant increase in manpower,” said Teo.

Additionally, security threats including pandemics and terrorism call for Singapore to “go upstream” on immigration measures, “such as the collection of advance passenger and crew information, and entry restrictions to be imposed on undesirable foreigners, even before they arrive at our shores,” added the minister.

This collection and sharing of biometric information is what enables the passport-free immigration process – passenger and crew information will need to be disclosed to the airport operator to use for bag management, access control, gate boarding, duty-free purchases, as well as tracing individuals within the airport for security purposes.

The shared biometrics will serve as a “single token of authentication” across all touch points.

Members of Singapore’s parliament have raised concerns about shifting to universal automated clearance, including data privacy and the handling of technical glitches.

According to Teo, only Singaporean companies will be allowed ICA-related IT contracts, vendors will be given non-disclosure agreements, and employees of such firms must undergo security screening. Traveler data will be encrypted and transported through data exchange gateways.

As for who will protect the data, that role goes to CAG, with ICA auditing its compliance.

In case of disruptions that can’t be handled by an uninterruptible power supply, off-duty officers will be called in to go back to analog.

And even though the ministry is pushing universal coverage, there will be some exceptions, such as those who are unable to provide certain biometrics or are less digitally literate. Teo promised their clearance can be done manually by immigration officers.

Source: Singapore plans to scan your face instead of your passport • The Register

Data safety is a real issue here – how long will the data be collected and for what other purposes will it be used?

UK passport and immigration images database could be repurposed to catch shoplifters

Britain’s passport database could be used to catch shoplifters, burglars and other criminals under urgent plans to curb crime, the policing minister has said.

Chris Philp said he planned to integrate data from the police national database (PND), the Passport Office and other national databases to help police find a match with the “click of one button”.

But civil liberty campaigners have warned the plans would be an “Orwellian nightmare” that amount to a “gross violation of British privacy principles”.

Foreign nationals who are not on the passport database could also be found via the immigration and asylum biometrics system, which will be part of an amalgamated system to help catch thieves.

[…]

Until the new platform is created, he said police forces should search each database separately.

[…]

Emmanuelle Andrews, policy and campaigns manager at the campaign group, said: “Time and time again the government has relied on the social issue of the day to push through increasingly authoritarian measures. And that’s just what we’re seeing here with these extremely worrying proposals to encourage the police to scan our faces as we go to buy a pint of milk and trawl through our personal information.

“By enabling the police to use private dashcam footage, as well as the immigration and asylum system, and passport database, the government are turning our neighbours, loved ones, and public service officials into border guards and watchmen.

[…]

Silkie Carlo, director of Big Brother Watch, said: “Philp’s plan to subvert Brits’ passport photos into a giant police database is Orwellian and a gross violation of British privacy principles. It means that over 45 million of us with passports who gave our images for travel purposes will, without any kind of consent or the ability to object, be part of secret police lineups.

“To scan the population’s photos with highly inaccurate facial recognition technology and treat us like suspects is an outrageous assault on our privacy that totally overlooks the real reasons for shoplifting. Philp should concentrate on fixing broken policing rather than building an automated surveillance state.

“We will look at every possible avenue to challenge this Orwellian nightmare.”

Source: UK passport images database could be used to catch shoplifters | Police | The Guardian

Also, time and again we have seen that centralised databases are a really really bad idea – the data gets stolen and misused by the operators.

Firefox now has private browser-based website translation – no cloud servers required

Web browsers have had tools that let you translate websites for years. But they typically rely on cloud-based translation services like Google Translate or Microsoft’s Bing Translator.

The latest version of Mozilla’s Firefox web browser does things differently. Firefox 118 brings support for Fullpage Translation, which can translate websites entirely in your browser. In other words, everything happens locally on your computer without any data sent to Microsoft, Google, or other companies.

Here’s how it works. Firefox will notice when you visit a website in a supported language that’s different from your default language, and a translate icon will show up in the address bar.

Tap that icon and you’ll see a pop-up window that asks what languages you’d like to translate from and to. If the browser doesn’t automatically detect the language of the website you’re visiting, you can set these manually.

Then click the “Translate” button, and a moment later the text on the page should be visible in your target language. If you’d prefer to go back to the original language, just tap the translate icon again and choose the option that says “show original.”

You can also tap the settings icon in the translation menu and choose to “always translate” or “never translate” a specific language so that you won’t have to manually invoke the translation every time you visit sites in that language.

Now for the bad news: Firefox Fullpage Translation only supports 9 languages so far:

  • Bulgarian
  • Dutch
  • English
  • French
  • German
  • Italian
  • Polish
  • Portuguese
  • Spanish

[…]

Source: Firefox 118 brings browser-based website translation (no cloud servers required… for a handful of supported languages) – Liliputing

Feds Probing Tesla For Lying About EV Ranges, Bullshitting Customers Who Complained

Back in July, Reuters released a bombshell report documenting how Tesla not only spent a decade falsely inflating the range of their EVs, but created teams dedicated to bullshitting Tesla customers who called in to complain about it. If you recall, Reuters noted how these teams would have a little, adorable party every time they got a pissed off user to cancel a scheduled service call. Usually by lying to them:

“Inside the Nevada team’s office, some employees celebrated canceling service appointments by putting their phones on mute and striking a metal xylophone, triggering applause from coworkers who sometimes stood on desks. The team often closed hundreds of cases a week and staffers were tracked on their average number of diverted appointments per day.”

The story managed to stay in the headlines for all of a day or two, quickly supplanted by gossip surrounding a non-existent Elon Musk-Mark Zuckerberg fist fight.

But here in reality, Tesla’s routine misrepresentation of their product (and almost joyous gaslighting of their paying customers) has caught the eye of federal regulators, who are now investigating the company for fraudulent behavior:

“federal prosecutors have opened a probe into Tesla’s alleged range-exaggerating scheme, which involved rigging its cars’ software to show an inflated range projection that would then abruptly switch to an accurate projection once the battery dipped below 50% charged. Tesla also reportedly created an entire secret “diversion team” to dissuade customers who had noticed the problem from scheduling service center appointments.”

This pretty clearly meets the threshold definition of “unfair and deceptive” under the FTC Act, so this shouldn’t be that hard of a case. Of course, whether it results in any sort of meaningful penalties or fines is another matter entirely. It’s very clear Musk historically hasn’t been very worried about what’s left of the U.S. regulatory and consumer protection apparatus holding him accountable for… anything.

Still, it’s yet another problem for a company that’s facing a flood of new competitors with an aging product line. And it’s another case thrown in Tesla’s lap on top of the glacially-moving inquiry into the growing pile of corpses caused by obvious misrepresentation of under-cooked “self driving” technology, and an investigation into Musk covertly using Tesla funds to build himself a glass mansion.

Source: Feds Probing Tesla For Lying About EV Ranges, Bullshitting Customers Who Complained | Techdirt

Philips Hue / Signify Ecosystem: ‘Collapsing Into Stupidity’

The Philips Hue ecosystem of home automation devices is “collapsing into stupidity,” writes Rachel Kroll, veteran sysadmin and former production engineer at Facebook. “Unfortunately, the idiot C-suite phenomenon has happened here too, and they have been slowly walking down the road to full-on enshittification.” From her blog post: I figured something was up a few years ago when their iOS app would block entry until you pushed an upgrade to the hub box. That kind of behavior would never fly with any product team that gives a damn about their users — want to control something, so you start up the app? Forget it, we are making you placate us first! How is that user-focused, you ask? It isn’t.

Their latest round of stupidity pops up a new EULA and forces you to take it or, again, you can’t access your stuff. But that’s just more unenforceable garbage, so who cares, right? Well, it’s getting worse.

It seems they are planning on dropping an update which will force you to log in. Yep, no longer will your stuff Just Work across the local network. Now it will have yet another garbage “cloud” “integration” involved, and they certainly will find a way to make things suck even worse for you. If you have just the lights and smart outlets, Kroll recommends deleting the units from the Hue Hub and adding them to an IKEA Dirigera hub. “It’ll run them just fine, and will also export them to HomeKit so that much will keep working as well.” That said, it’s not a perfect solution. You will lose motion sensor data, the light level, the temperature of that room, and the ability to set custom behaviors with those buttons.

“Also, there’s no guarantee that IKEA won’t hop on the train to sketchville and start screwing over their users as well,” adds Kroll.

Source: Is the Philips Hue Ecosystem ‘Collapsing Into Stupidity’? – Slashdot

Chip firm Rivos countersues Apple, alleges illegal contracts and unnecessary court cases

A chip startup that Apple is suing for theft of trade secrets and breach of contract has, together with several of its employees, filed a countersuit.

Rivos was sued [PDF] by Apple early last year over claims it lured away a gaggle of Apple employees working on system-on-chip (SoC) designs like those in its Mac and iPhone devices. Rivos and several of its employees who previously worked at Apple were named in the suit, and six of them participated with Rivos in the countersuit [PDF] filed in the District Court for the Northern District of California on Friday.

In the original lawsuit, Apple accused Rivos, which was founded in 2021 to develop RISC-V SoCs for servers, of a “coordinated campaign to target Apple employees with access to Apple proprietary and trade secret information about Apple’s SoC designs.” Apple claimed that when it informed Rivos of the employees’ confidentiality and intellectual property agreements (IPAs), Rivos never responded.

Instead, “after accepting their offers from Rivos, some of these employees took gigabytes of sensitive SoC specifications and design files during their last days of employment with Apple,” lawyers for Cupertino alleged.

A judge in the lawsuit dismissed [PDF] claims of trade secret theft against Rivos and two of its employees in August with leave to amend, but let other Defend Trade Secrets Act claims against individual employees, as well as the breach of contract claims, stand.

Apple has tried this before and failed, reasons Rivos

In its countersuit, Rivos and six of its employees argue that, rather than competing, “Apple has resorted to trying to thwart emerging startups through anticompetitive measures, including illegally restricting employee mobility.”

Methods Apple has used to stymie employee mobility include the aforementioned IPAs, which Rivos lawyers argue violate California’s Business and Professions Code rules voiding contracts that restrict an individual’s ability to engage in a lawful business, profession or trade.

Under California law, Rivos lawyers claim, such a violation means Apple is engaging in unfair and unlawful business practices that have caused injury to Rivos through the need to fight such a lengthy and, if the contracts are unenforceable, unnecessary court battle.

“Apple’s actions not only violate the laws and public policy of the State of California, but also undermine the free and open competition that has made the state the birthplace of countless innovative businesses,” Rivos’s lawyers argue in the lawsuit.

Rivos also claims that Apple applies its IPA piecemeal, and often abuses it to preserve future legal opportunities for itself.

“Even when Apple knows its employees are leaving to work somewhere that Apple (rightly or wrongly) perceives as a competitive threat, it does not consistently conduct exit interviews or give employees any meaningful instruction about what they should do with supposedly ‘confidential’ Apple material upon leaving,” the countersuit claims.

“Apple lets these employees walk out the door with material they may have inadvertently ‘retained’ simply by using the Apple systems (such as iCloud or iMessage) that Apple effectively mandates they use as part of their work.”

Rivos argues in its filing that Apple tried this exact same scheme before and it failed then too.

That incident involved Arm-compatible chipmaker Nuvia, which was founded by former Apple chip chief Gerard Williams in 2019. Apple sued Williams that same year over claims he violated his contract with Apple and tried to poach employees for his startup.

Williams unsurprisingly made the same claims as Rivos – that the Apple contracts were unenforceable under California law – and after a couple years of stalling, Apple finally abandoned its suit against Williams with little justification.

The iGiant didn’t respond to our questions about the countersuit.

Source: Chip firm Rivos countersues Apple, alleges illegal contracts • The Register

Philips Hue will force users to upload their data to Hue cloud – changing their TOS after you bought the product for not needing an account

Today’s story is about Philips Hue by Signify. They will soon start forcing accounts on all users and upload user data to their cloud. For now, Signify says you’ll still be able to control your Hue lights locally as you’re currently used to, but we don’t know if this may change in the future. The privacy policy allows them to store the data and share it with partners.

[…]

When you open the Philips Hue app you will now be prompted with a new message: Starting soon, you’ll need to be signed in.

[…]

So today, you can choose not to share your information with Signify by not creating an account. But this choice will soon be taken away and all users will need to share their data with Philips Hue.

Confirming the news

I didn’t want to cry wolf, so I decided to verify the above statement with Signify. They sadly confirmed:

Twitter conversation with Philips Hue (source: Twitter)

The policy they are referring to is their privacy policy (April 2023 edition, download version).

[…]

When asked what drove this change, the answer is the usual: security. Well Signify, you know what keeps user data even more secure? Not uploading it all to your cloud.

[…]

As a user, we encourage you to reach out to Signify support and voice your concern.

NOTE: Their support form doesn’t work. You can visit their Facebook page though

Dear Signify, please reconsider your decision and do not move forward with it. You’ve reversed bad decisions before. People care about privacy and forcing accounts will hurt the brand in the long term. The pain caused by this is not worth the gain.

Source: Philips Hue will force users to upload their data to Hue cloud

No, Philips / Signify – I have used these devices for years without having to have an account or be connected to the internet. It’s one of the reasons I bought into Hue. Making us give up data to use something we bought after we bought it is a dangerous decision considering the private and exploitable nature of the data, as well as greedy and rude.

T-Mobile US exposes some customer data, but don’t say breach

T-Mobile US has had another bad week on the infosec front – this time stemming from a system glitch that exposed customer account data, followed by allegations of another breach the carrier denied.

According to customers who complained of the issue on Reddit and X, the T-Mobile app was displaying other customers’ data instead of their own – including the strangers’ purchase history, credit card information, and address.

This being T-Mobile’s infamously leaky US operation, people immediately began leaping to the obvious conclusion: another cyber attack or breach.

“There was no cyber attack or breach at T-Mobile,” the telco assured us in an emailed statement. “This was a temporary system glitch related to a planned overnight technology update involving limited account information for fewer than 100 customers, which was quickly resolved.”

Note, as Reddit poster Jman100_JCMP did, that T-Mobile means fewer than 100 customers had their data exposed – but far more appear to have been able to view those 100 customers’ data.

As for the breach, the appearance of exposed T-Mobile data was alleged by malware repository vx-underground’s X (Twitter) account. The Register understands T-Mobile examined the data and determined that independently owned T-Mobile dealer, Connectivity Source, was the source – resulting from a breach it suffered in April. We understand T-Mobile believes vx-underground misinterpreted a data dump.

Connectivity Source was indeed the subject of a breach in April, in which an unknown attacker made off with employee data including names and social security numbers – around 17,835 of them from across the US, where Connectivity appears to do business exclusively as a white-labelled T-Mobile US retailer.

Looks like the carrier really dodged the bullet on this one – there’s no way Connectivity Source employees could be mistaken for its own staff.

T-Mobile US has already experienced two prior breaches this year, but that hasn’t imperilled the biz much – its profits have soared recently and some accompanying sizable layoffs will probably keep things in the black for the foreseeable future.

Source: T-Mobile US exposes some customer data, but don’t say breach • The Register

EU reinstates $400 million fine on Intel for blocking sales of competing chips

The European Commission has imposed a €376.36 million ($400 million) fine on Intel for blocking the sales of devices powered by its competitors’ x86 CPUs. This brings one part of the company’s long-running antitrust court battle with the European authority to a close. If you’ll recall, the Commission slapped the chipmaker with a record-breaking €1.06 billion ($1.13 billion) fine in 2009 after it had determined that Intel abused its dominant position in the market.

It found back then that the company gave hidden rebates and incentives to manufacturers like HP, Dell and Lenovo for buying all or almost all their processors from Intel. The Commission also found that Intel paid manufacturers to delay or completely cease the launch of products powered by its rivals’ CPUs. Other times, Intel apparently paid companies to limit those products’ sales channels. The Commission calls these actions “naked restrictions.”

[…]

In its announcement, the European Commission gave a few examples of how Intel hindered the sales of competing products. It apparently paid HP between November 2002 and May 2005 to sell AMD-powered business desktops only to small- and medium-sized enterprises and via direct distribution channels. It also paid Acer to delay the launch of an AMD-based notebook from September 2003 to January 2004. Intel paid Lenovo to push back the launch of AMD-based notebooks for half a year, as well.

The Commission has since appealed the General Court’s decision to dismiss the part of the case related to the rebates Intel offered its clients. Intel, however, did not lodge an appeal for the court’s ruling on naked restrictions, setting it in stone. “With today’s decision, the Commission has re-imposed a fine on Intel only for its naked restrictions practice,” the European authority wrote. “The fine does not relate to Intel’s conditional rebates practice. The fine amount, which is based on the same parameters as the 2009 Commission’s decision, reflects the narrower scope of the infringement compared to that decision.” Seeing as the rebates part of the case is under appeal, Intel could still pay the rest of the fine in the future.

Source: EU reinstates $400 million fine on Intel for blocking sales of competing chips

Dutch privacy watchdog SDBN sues Twitter for collecting and selling data via MoPub (Wordfeud, Duolingo, etc) without notifying users

The Dutch Data Protection Foundation (SDBN) wants to enforce a mass claim for 11 million people through the courts against social media company X, the former Twitter. Between 2013 and 2021, that company owned the advertising platform MoPub, which, according to the privacy foundation, illegally traded in data from users of more than 30,000 free apps such as Wordfeud, Buienradar and Duolingo.

SDBN has been trying to reach an agreement with X since November last year, but according to the foundation, without success. That is why SDBN is now starting a lawsuit at the Rotterdam court. Central to this is MoPub’s handling of personal data such as religious beliefs, sexual orientation and health. In addition to compensation, SDBN wants this data to be destroyed.

The foundation also believes that users are entitled to a share of the profits. A lot of money can be made by sharing personal data with thousands of companies, says SDBN chairman Anouk Ruhaak, although she says it is difficult to find out exactly which companies had access to the data. “By holding X Corp liable, we hope not only to obtain compensation for all victims, but also to put a stop to this type of practice,” said Ruhaak. “Unfortunately, these types of companies often only listen when it hurts financially.”

Source: De Ondernemer | Privacystichting SDBN wil via rechter massaclaim bij…

Join the claim here

The maestro: The man who built the biggest match-fixing ring in tennis

On the morning of his arrest, Grigor Sargsyan was still fixing matches. Four cellphones buzzed on his nightstand with calls and messages from around the world.

Sargsyan was sprawled on a bed in his parents’ apartment, making deals between snatches of sleep. It was 3 a.m. in Brussels, which meant it was 8 a.m. in Thailand. The W25 Hua Hin tournament was about to start.

Sargsyan was negotiating with professional tennis players preparing for their matches, athletes he had assiduously recruited over years. He needed them to throw a game or a set — or even just a point — so he and a global network of associates could place bets on the outcomes.

That’s how Sargsyan had become rich. As gambling on tennis exploded into a $50 billion industry, he had infiltrated the sport, paying pros more to lose matches, or parts of matches, than they could make by winning tournaments.

Sargsyan had crisscrossed the globe building his roster, which had grown to include more than 180 professional players across five continents. It was one of the biggest match-fixing rings in modern sports, large enough to earn Sargsyan a nickname whispered throughout the tennis world: the Maestro.

This Washington Post investigation of Sargsyan’s criminal enterprise, and how the changing nature of gambling has corrupted tennis, is based on dozens of interviews with players, coaches, investigators, tennis officials and match fixers.

[…]

Source: The maestro: The man who built the biggest match-fixing ring in tennis

Google Chrome’s Privacy Sandbox: any site can now query all your habits

[…]

Specifically, the web giant’s Privacy Sandbox APIs, a set of ad delivery and analysis technologies, now function in the latest version of the Chrome browser. Website developers can thus write code that calls those APIs to deliver and measure ads to visitors with compatible browsers.

That is to say, sites can ask Chrome directly what kinds of topics you’re interested in – topics automatically selected by Chrome from your browsing history – so that ads personalized to your activities can be served. This is supposed to be better than being tracked via third-party cookies, support for which is being phased out. There are other aspects to the sandbox that we’ll get to.

While Chrome is the main vehicle for Privacy Sandbox code, Microsoft Edge, based on the open source Chromium project, has also shown signs of supporting the technology. Apple and Mozilla have rejected at least the Topics API for interest-based ads on privacy grounds.

[…]

“The Privacy Sandbox technologies will offer sites and apps alternative ways to show you personalized ads while keeping your personal information more private and minimizing how much data is collected about you.”

These APIs include:

  • Topics: Locally track browsing history to generate ads based on demonstrated user interests without third-party cookies or identifiers that can track across websites.
  • Protected Audience (FLEDGE): Serve ads for remarketing (e.g. you visited a shoe website so we’ll show you a shoe ad elsewhere) while mitigating third-party tracking across websites.
  • Attribution Reporting: Data to link ad clicks or ad views to conversion events (e.g. sales).
  • Private Aggregation: Generate aggregate data reports using data from Protected Audience and cross-site data from Shared Storage.
  • Shared Storage: Allow unlimited, cross-site storage write access with privacy-preserving read access. In other words, you graciously provide local storage via Chrome for ad-related data or anti-abuse code.
  • Fenced Frames: Securely embed content onto a page without sharing cross-site data. Or iframes without the security and privacy risks.
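To make the first of these concrete: a site can ask the browser directly for a visitor’s inferred interests by calling `document.browsingTopics()`. The sketch below is a hedged illustration, not Google’s reference code; it feature-detects the API so it degrades gracefully in browsers (or non-browser runtimes) that don’t expose it, and the fallback-to-empty-array behavior is our own choice.

```javascript
// Hedged sketch of querying the Topics API from a page.
// `document.browsingTopics()` resolves to an array of topic objects
// (each with a numeric `topic` id from Google's taxonomy) in
// Chromium builds that ship the Privacy Sandbox.
async function getUserTopics() {
  // Feature-detect: only supporting browsers expose the call.
  if (typeof document === 'undefined' || !('browsingTopics' in document)) {
    return []; // unsupported browser, or not a browser at all
  }
  try {
    return await document.browsingTopics();
  } catch {
    return []; // user opted out, or permissions policy blocks the call
  }
}

getUserTopics().then(topics => {
  console.log(topics.length === 0
    ? 'Topics API unavailable or no topics observed'
    : topics.map(t => t.topic).join(', '));
});
```

An ad-tech script would forward the returned topic ids to its server to pick a personalized ad – which is exactly the “sites can ask Chrome what you’re interested in” behavior described above.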

These technologies, Google and industry allies believe, will allow the super-corporation to drop support for third-party cookies in Chrome next year without seeing a drop in targeted advertising revenue.

[…]

“Privacy Sandbox removes the ability of website owners, agencies and marketers to target and measure their campaigns using their own combination of technologies in favor of a Google-provided solution,” James Rosewell, co-founder of MOW, told The Register at the time.

[…]

Controversially, in the US, where the lack of coherent privacy rules suits ad companies just fine, the popup merely informs the user that these APIs are now present and active in the browser; actually managing them requires visiting Chrome’s Settings page – you have to opt out, if you haven’t already. In the EU, as required by law, the notification is an invitation to opt in to interest-based ads via Topics.

Source: How Google Chrome’s Privacy Sandbox works and what it means • The Register

Google taken to court in NL for large scale privacy breaches

The Foundation for the Protection of Privacy Interests and the Consumers’ Association are taking the next step in their fight against Google. The tech company is being taken to court today for ‘large-scale privacy violations’.

The proceedings demand, among other things, that Google stop its constant surveillance and sharing of personal data through online advertising auctions and also pay damages to consumers. Since the announcement of this action on May 23, 2023, more than 82,000 Dutch people have already joined the mass claim.

According to the organizations, Google is acting in violation of Dutch and European privacy legislation. The tech giant collects users’ online behavior and location data on an immense scale through its services and products, without providing enough information or obtaining permission. Google then shares that data – including highly sensitive personal data about, for example, health, ethnicity and political preference – with hundreds of parties via its online advertising platform.

Google is constantly monitoring everyone. Even via third-party cookies – which are invisible to users – Google continues to collect data through other people’s websites and apps, even when someone is not using its products or services. This enables Google to monitor almost the entire internet behavior of its users.

All these matters have been discussed with Google, to no avail.

The Foundation for the Protection of Privacy Interests represents the interests of users of Google’s products and services living in the Netherlands who have been harmed by privacy violations. The foundation is working together with the Consumers’ Association in the case against Google. Consumers’ Association Claimservice, a partnership between the Consumers’ Association and ConsumersClaim, processes the registrations of affiliated victims.

More than 82,000 consumers have already registered for the Google claim. They demand compensation of 750 euros per participant.

A lawsuit by the American government against Google also starts today in the US. Ten weeks have been set aside for it. That case mainly revolves around the power of Google’s search engine.

Essentially, Google is accused of entering into exclusive agreements to guarantee the use of its search engine – agreements that prevent alternative search engines from being pre-installed, or Google’s search app from being removed.

Source: Google voor de rechter gedaagd wegens ‘grootschalige privacyschendingen’ – Emerce (NL)

Microsoft to stop forcing Windows 11 users into Edge in EU countries

Microsoft will finally stop forcing Windows 11 users in Europe into Edge if they click a link from the Windows Widgets panel or from search results. The software giant has started testing the changes to Windows 11 in recent test builds of the operating system, but the changes are restricted to countries within the European Economic Area (EEA).

“In the European Economic Area (EEA), Windows system components use the default browser to open links,” reads a change note from a Windows 11 test build released to Dev Channel testers last month. I asked Microsoft to comment on the changes and, in particular, why they’re only being applied to EU countries. Microsoft refused to comment.

Microsoft has been ignoring default browser choices in Windows 10, where both the search experience and the taskbar widget force users into Edge when they click a link, instead of opening their default browser. Windows 11 continued this trend, with search still forcing users into Edge and a new dedicated widgets area that also ignores the default browser setting.

[…]

Source: Microsoft to stop forcing Windows 11 users into Edge in EU countries – The Verge

Big Tech failed to police Russian disinformation: EU study

[…]

The independent study of the DSA’s risk management framework published by the EU’s executive arm, the European Commission, concluded that commitments by social media platforms to mitigate the reach and influence of global online disinformation campaigns have been generally unsuccessful.

The reach of Kremlin-sponsored disinformation has only increased since the major platforms all signed a voluntary Code of Practice on Disinformation in mid-2022.

“In theory, the requirements of this voluntary Code were applied during the second half of 2022 – during our period of study,” the researchers said. We’re sure you’re just as shocked as we are that social media companies failed to uphold a voluntary commitment.

Between January and May of 2023, “average engagement [of pro-Kremlin accounts rose] by 22 percent across all online platforms,” the study said. By absolute numbers, the report found, Meta led the pack on engagement with Russian misinformation. However, the increase was “largely driven by Twitter, where engagement grew by 36 percent after CEO Elon Musk decided to lift mitigation measures on Kremlin-backed accounts,” researchers concluded. Twitter, now known as X, pulled out of the disinformation Code in May.

Across the platforms studied – Facebook, Instagram, Telegram, TikTok, Twitter and YouTube – Kremlin-backed accounts have amassed some 165 million followers and have had their content viewed at least 16 billion times “in less than a year.” None of the platforms we contacted responded to questions.

[…]

The EU’s Digital Services Act and its requirements that VLOPs (defined by the Act as companies large enough to reach 10 percent of the EU, or roughly 45 million people) police illegal content and disinformation became enforceable late last month.

Under the DSA, VLOPs are also required “to tackle the spread of illegal content, online disinformation and other societal risks,” such as, say, the massive disinformation campaign being waged by the Kremlin since Putin decided to invade Ukraine last year.

[…]

Now that VLOPs are bound by the DSA, will anything change? We asked the European Commission if it can take any enforcement actions, or whether it’ll make changes to the DSA to make disinformation rules tougher, but have yet to hear back.

Two VLOPs are fighting their designation: Amazon and German fashion retailer Zalando. The two orgs claim that as retailers, they shouldn’t be considered in the same category as Facebook, Pinterest, and Wikipedia.

[…]

Source: Big Tech failed to police Russian disinformation: EU study • The Register

TV Museum Will Die in 48 Hours Unless Sony Retracts YouTube Copyright Strikes on 40- to 60-Year-Old TV Shows

Rick Klein and his team have been preserving TV adverts, forgotten tapes, and decades-old TV programming for years. Now operating as a 501(c)(3) non-profit, the Museum of Classic Chicago Television has called YouTube home since 2007. However, copyright notices sent on behalf of Sony, protecting TV shows between 40 and 60 years old, could shut down the project in 48 hours.

[…]

After being reborn on YouTube as The Museum of Classic Chicago Television (MCCTv), the last sixteen years have been quite a ride. Over 80 million views later, MCCTv is a much-loved 501(c)(3) non-profit Illinois corporation but, in just 48 hours, it may simply cease to exist.

In a series of emails starting Friday and continuing over the weekend, Klein began by explaining his team’s predicament, one that TorrentFreak has heard time and again over the past few years. Acting on behalf of a copyright owner, in this case Sony, India-based anti-piracy company Markscan hit the MCCTv channel with a flurry of copyright claims. If these cannot be resolved, the entire project may disappear.

[…]

No matter whether takedowns are justified, unjustified (Markscan hit Sony’s own website with a DMCA takedown recently), or simply disputed, getting Markscan’s attention is a lottery at best, impossible at worst. In MCCTv’s short experience, nothing has changed.

“Our YouTube channel with 150k subscribers is in danger of being terminated by September 6th if I don’t find a way to resolve these copyright claims that Markscan made,” Klein told TorrentFreak on Friday.

“At this point, I don’t even care if they were issued under authorization by Sony or not – I just need to reach a live human being to try to resolve this without copyright strikes. I am willing to remove the material manually to get the strikes reversed.”

[…]

Complaints Targeted TV Shows 40 to 60 Years Old

[…]

Two episodes of the TV series Bewitched aired on the ABC network in 1964. Almost sixty years later, archive copies of those transmissions were removed from YouTube for violating Sony copyrights, with MCCTv receiving a strike.

[…]

Given that copyright law locks content down for decades, Klein understands that can sometimes cause issues, although 16 years on YouTube suggests that the overwhelming majority of rightsholders don’t consider his channel a threat. If they did, they could always choose to monetize the recordings instead.

No Competition For Commercial Offers

Why most rightsholders have left MCCTv alone is hard to say; perhaps some see the historical value of the channel, maybe others don’t know it exists. At least in part, Klein believes the low quality of the videos could be significant.

“These were relatively low picture quality broadcast examples from various channels from various years at least 30-40 years ago, with the original commercial breaks intact. Also mixed in with these were examples of ’16mm network prints’ which are surviving original film prints that were sent out to TV stations back in the day from when the show originally aired. In many cases they include original sponsorship notices, original network commercials, ‘In Color’ notices, etc.,” he explains.

[…]

Klein says the team is happy to comply with Sony’s wishes and they hope that given a little leeway, the project won’t be consigned to history. Perhaps Sony will recall the importance of time-shifting while understanding that time itself is running out for The Museum of Classic Chicago Television.

Source: TV Museum Will Die in 48 Hours Unless Sony Retracts YouTube Copyright Strikes * TorrentFreak

Mozilla investigates 25 major car brands and finds privacy is shocking

[…]

The foundation, the Firefox browser maker’s netizen-rights org, assessed the privacy policies and practices of 25 automakers and found all failed its consumer privacy tests and thereby earned its Privacy Not Included (PNI) warning label.

If you care even a little about privacy, stay as far away from Nissan’s cars as you possibly can

In research published Tuesday, the org warned that manufacturers may collect and commercially exploit much more than location history, driving habits, in-car browser histories, and music preferences from today’s internet-connected vehicles. Instead, some makers may handle deeply personal data, such as – depending on the privacy policy – sexual activity, immigration status, race, facial expressions, weight, health, and even genetic information, the Mozilla team found.

Cars may collect at least some of that info about drivers and passengers using sensors, microphones, cameras, phones, and other devices people connect to their network-connected cars, according to Mozilla. And they collect even more info from car apps – such as Sirius XM or Google Maps – plus dealerships, and vehicle telematics.

Some car brands may then share or sell this information to third parties. Mozilla found 21 of the 25 automakers it considered say they may share customer info with service providers, data brokers, and the like, and 19 of the 25 say they can sell personal data.

More than half (56 percent) also say they share customer information with the government or law enforcement in response to a “request.” This isn’t necessarily a court-ordered warrant, and can also be a more informal request.

And some – like Nissan – may also use this private data to develop customer profiles that describe drivers’ “preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.”

Yes, you read that correctly. According to Mozilla’s privacy researchers, Nissan says it can infer how smart you are, then sell that assessment to third parties.

[…]

Nissan isn’t the only brand to collect information that seems completely irrelevant to the vehicle itself or the driver’s transportation habits.

“Kia mentions sex life,” Caltrider said. “General Motors and Ford both mentioned race and sexual orientation. Hyundai said that they could share data with government and law enforcement based on formal or informal requests. Car companies can collect even more information than reproductive health apps in a lot of ways.”

[…]

The Privacy Not Included team contacted Nissan and all of the other brands listed in the research: that’s Lincoln, Mercedes-Benz, Acura, Buick, GMC, Cadillac, Fiat, Jeep, Chrysler, BMW, Subaru, Dacia, Hyundai, Dodge, Lexus, Chevrolet, Tesla, Ford, Honda, Kia, Audi, Volkswagen, Toyota and Renault.

Only three – Mercedes-Benz, Honda, and Ford – responded, we’re told.

“Mercedes-Benz did answer a few of our questions, which we appreciate,” Caltrider said. “Honda pointed us continually to their public privacy documentation to answer [our] questions, but they didn’t clarify anything. And Ford said they discussed our request internally and made the decision not to participate.”

This makes Mercedes’ response to The Register a little puzzling. “We are committed to using data responsibly,” a spokesperson told us. “We have not received or reviewed the study you are referring to yet and therefore decline to comment to this specifically.”

A spokesperson for the four Fiat-Chrysler-owned brands (Fiat, Chrysler, Jeep, and Dodge) told us: “We are reviewing accordingly. Data privacy is a key consideration as we continually seek to serve our customers better.”

[…]

The Mozilla Foundation also called out consent as an issue some automakers have placed in a blind spot.

“I call this out in the Subaru review, but it’s not limited to Subaru: it’s the idea that anybody that is a user of the services of a connected car, anybody that’s in a car that uses services is considered a user, and any user is considered to have consented to the privacy policy,” Caltrider said.

Opting out of data collection is another concern.

Tesla, for example, appears to give users the choice between protecting their data or protecting their car. Its privacy policy does allow users to opt out of data collection but, as Mozilla points out, Tesla warns customers: “If you choose to opt out of vehicle data collection (with the exception of in-car Data Sharing preferences), we will not be able to know or notify you of issues applicable to your vehicle in real time. This may result in your vehicle suffering from reduced functionality, serious damage, or inoperability.”

While technically this does give users a choice, it also essentially says if you opt out, “your car might become inoperable and not work,” Caltrider said. “Well, that’s not much of a choice.”

[…]

Source: Mozilla flunks 25 major car brands for data privacy fails • The Register

Australian Government, Of All Places, Says Age Verification Is A Privacy & Security Nightmare

In the past I’ve sometimes described Australia as the land where internet policy is completely upside down. Rather than having a system that protects intermediaries from liability for third party content, Australia went the opposite direction. Rather than recognizing that a search engine merely links to content and isn’t responsible for the content at those links, Australia has said that search engines can be held liable for what they link to. Rather than protect the free expression of people on the internet who criticize the rich and powerful, Australia has extremely problematic defamation laws that result in regular SLAPP suits and suppression of speech. Rather than embrace encryption that protects everyone’s privacy and security, Australia requires companies to break encryption, insisting only criminals use it.

It’s basically been “bad internet policy central,” or the place where good internet policy goes to die.

And, yet, there are some lines that even Australia won’t cross. Specifically, the Australian eSafety Commission says that it will not require adult websites to use age verification tools, because doing so would put the privacy and security of Australians’ data at risk. (For unclear reasons, the Guardian does not provide the underlying documents, so we’re fixing that and providing both the original roadmap and the Australian government’s response.)

[…]

Of course, in France, the Data Protection authority released a paper similarly noting that age verification was a privacy and security nightmare… and the French government just went right on mandating the use of the technology. In Australia, the eSafety Commission pointed to the French concerns as a reason not to rush into the tech, meaning that Australia took the lessons from French data protection experts more seriously than the French government did.

And, of course, here in the US, the Congressional Research Service similarly found serious problems with age verification technology, but it hasn’t stopped Congress from releasing a whole bunch of “save the children” bills that are built on a foundation of age verification.

[…]

Source: Australian Government, Of All Places, Says Age Verification Is A Privacy & Security Nightmare | Techdirt

OpenAI disputes authors’ claims that every ChatGPT response is a derivative work, says it’s transformative

This week, OpenAI finally responded to a pair of nearly identical class-action lawsuits from book authors

[…]

In OpenAI’s motion to dismiss (filed in both lawsuits), the company asked a US district court in California to toss all but one claim alleging direct copyright infringement, which OpenAI hopes to defeat at “a later stage of the case.”

The authors’ other claims—alleging vicarious copyright infringement, violation of the Digital Millennium Copyright Act (DMCA), unfair competition, negligence, and unjust enrichment—need to be “trimmed” from the lawsuits “so that these cases do not proceed to discovery and beyond with legally infirm theories of liability,” OpenAI argued.

OpenAI claimed that the authors “misconceive the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

According to OpenAI, even if the authors’ books were a “tiny part” of ChatGPT’s massive data set, “the use of copyrighted materials by innovators in transformative ways does not violate copyright.”

[…]

The purpose of copyright law, OpenAI argued, is “to promote the Progress of Science and useful Arts” by protecting the way authors express ideas, but “not the underlying idea itself, facts embodied within the author’s articulated message, or other building blocks of creative [expression],” which are arguably the elements of authors’ works that would be useful to ChatGPT’s training model. Citing a notable copyright case involving Google Books, OpenAI reminded the court that “while an author may register a copyright in her book, the ‘statistical information’ pertaining to ‘word frequencies, syntactic patterns, and thematic markers’ in that book are beyond the scope of copyright protection.”

[…]

Source: OpenAI disputes authors’ claims that every ChatGPT response is a derivative work | Ars Technica

So the authors are saying that if you read their book and then are inspired by it, you can’t use that memory – any of it – to write another book. Which also means that you presumably wouldn’t be able to use any words at all, as they have all appeared in copyrighted works that inspired you in the past as well.

Companies are recording your conversations whilst you are on hold with them

Is Achmea or Bol.com customer service putting you on hold? Then everything you say can still be heard by some of their employees, research by Radar has found.

When you call customer service, you often hear: “Please note: this conversation may be recorded for training purposes.” Nothing special. But if you call the insurer Zilveren Kruis, you will also hear: “Note: Even if you are on hold, our quality employees can hear what you are saying.”

Striking, because the Dutch Data Protection Authority states that recording customers while they are on hold is not allowed. Companies are allowed to record the conversation itself, for example to conclude a contract or to improve their service.

Both mortgage provider Woonfonds and insurers Zilveren Kruis, De Friesland and Interpolis confirm that the recording tape continues to run if you are on hold with them, while this violates privacy rules.

Bol.com also continues to eavesdrop on you while you are on hold, the webshop confirms. It gives the same reason for this: “It is technically not possible to temporarily stop the recording and start it again when the conversation starts again.”

KLM, Ziggo, Eneco, Vattenfall, T-Mobile, Nationale Nederlanden, ASR, ING and Rabobank say they do not eavesdrop on their customers while they are on hold.

Source: Diverse bedrijven waaronder bol.com nemen gesprekken ‘in de wacht’ op – Emerce

Tornado Cash ‘laundered over $1B’ in criminal cryptocurrency

Two founders of Tornado Cash were formally accused by US prosecutors today of laundering more than $1 billion in criminal proceeds through their cryptocurrency mixer.

As well as unsealing an indictment against the pair on Wednesday, the Feds also arrested one of them, 34-year-old Roman Storm, in his home state of Washington, and hauled him into court. Fellow founder and co-defendant Roman Semenov, a 35-year-old Russian citizen, is still at large.

As a cryptocurrency mixer, Tornado Cash is appealing to cybercriminals as it offers to provide them a degree of anonymity.

[…]

Tornado Cash was sanctioned by Uncle Sam a little over a year ago for helping North Korea’s Lazarus Group scrub funds stolen in the Axie Infinity hack. Additionally, the US Treasury Department said Tornado Cash was used to launder funds stolen in the Nomad bridge and Harmony bridge heists, both of which were also linked to Lazarus.

Storm and Semenov were both charged with conspiracy to commit money laundering and conspiracy to commit sanctions violations, each carrying a maximum penalty of 20 years in prison. A third charge, conspiracy to operate an unlicensed money transmitting business, could net the pair up to an additional five years upon conviction.

In the unsealed indictment [PDF], prosecutors said Tornado Cash boasted about its anonymizing features and that it could make money untraceable, and that Storm and Semenov refused to implement changes that would dial back Tornado’s thief-friendly money-laundering capabilities and bring it in line with financial regulations.

“Tornado Cash failed to establish an effective [anti money laundering] program or engage in any [know your customer] efforts,” Dept of Justice lawyers argued. Changes made publicly to make it appear as if Tornado Cash was legally compliant, the DoJ said, were laughed off as ineffective in private messages by the charged pair.

“While publicly claiming to offer a technically sophisticated privacy service, Storm and Semenov in fact knew that they were helping hackers and fraudsters conceal the fruits of their crimes,” said US Attorney Damian Williams. “Today’s indictment is a reminder that money laundering through cryptocurrency transactions violates the law, and those who engage in such laundering will face prosecution.”

What of the mysterious third founder?

While Storm and Semenov were the ones named on the rap sheet, they aren’t the only people involved with, or arrested over, Tornado Cash. A third unnamed and uncharged person mentioned in the DoJ indictment, referred to as “CC-1,” is described as one of the three main people behind the sanctioned service.

Despite that, the Dept of Justice didn’t announce any charges against CC-1.

Clues point to CC-1 potentially being Alexey Pertsev, a Russian software developer linked to Tornado Cash who was arrested in The Netherlands shortly after the US sanctioned the crypto-mixing site. Pertsev was charged in that Euro nation with facilitating money laundering and concealing criminal financial flows, and is now out of jail on monitored home release awaiting trial.

Pertsev denies any wrongdoing, and claimed he wasn’t told why he was being detained. His defenders argued he shouldn’t be held accountable for writing Tornado Cash code since he didn’t do any of the alleged money laundering himself.

It’s not immediately clear if Pertsev is CC-1, nor is it clear why CC-1 wasn’t charged. We put those questions to the DoJ, and haven’t heard back.

Source: Tornado Cash ‘laundered over $1B’ in criminal cryptocurrency

Our Inability To Recognize That Remixing Art Is Transformative Is Now Leading To Today’s AI/Copyright Mess

If you’ve never watched it, Kirby Ferguson’s “Everything is a Remix” series (which was recently updated from the original version that came out years ago) is an excellent look at how stupid our copyright laws are, and how they have really warped our view of creativity. As the series makes clear, creativity is all about remixing: taking inspiration and bits and pieces from other parts of culture and remixing them into something entirely new. All creativity involves this in some manner or another. There is no truly unique creativity.

And yet, copyright law assumes the opposite is true. It assumes that most creativity is entirely unique, and when remix and inspiration get too close, the powerful hand of the law has to slap people down.

[…]

It would have been nice if society had taken this issue seriously back then, recognized that “everything is a remix,” and that encouraging remixing and reusing the works of others to create something new and transformative was not just a good thing, but one that should be supported. If so, we might not be in the utter shitshow that is the debate over generative art from AI these days, in which many creators are rushing to AI to save them, even though that’s not what copyright was designed to do, nor is it a particularly useful tool in that context.

[…]

The moral panic is largely an epistemological crisis: We don’t have a socially acceptable status for the legibility of the remix as art-in-its-own-right. Instead of properly appreciating the art of the DJ, the remix, or meme cultures, we have shoehorned all the associated cultural properties onto an 1800s sheet-music-publishing-based model of artistic credibility. The fit was never really good, but no-one really cared because the scenes were small, underground, and their breaking the rules was largely out-of-sight.

[…]

AI art tools are simply resurfacing an old problem we left behind unresolved during the 1980’s to early 2000’s. Now it’s time for us to blow the dust off these old books and apply what was learned to the situation we have at our hands now.

We should not forget that the modern electronic dance music industry has already developed models that promote new artists via remixes of their work by more established artists. These real-world examples, combined with the theoretical frameworks above, should help us explore a refreshed model of artistic credibility, where value is assigned to both the original artists and the authors of remixes.

[…]

Art, especially popular forms of it, has always been a lot about transformation: Taking what exists and creating something that works in this particular context. In forms of art emphasizing the distinctiveness of the original less, transformation becomes the focus of the artform instead.

[…]

There are a lot of questions about how that would actually work in practice, but I do think this is a useful framework for thinking about some of these questions, challenging some existing assumptions, and trying to rethink the system into one that is actually helping creators and helping to enable more art to be created, rather than trying to leverage a system originally developed to provide monopolies to gatekeepers into one that is actually beneficial to the public who want to experience art, and creators who wish to make art.

Source: Our Inability To Recognize That Remixing Art Is Transformative Is Now Leading To Today’s AI/Copyright Mess | Techdirt

AI-generated art cannot be copyrighted, judge rules – Only humans can be creative, apparently

Copyright issues have dogged AI since chatbot tech gained mass appeal, whether it’s accusations of entire novels being scraped to train ChatGPT or allegations that Microsoft and GitHub’s Copilot is pilfering code.

But one thing is for sure after a ruling [PDF] by the United States District Court for the District of Columbia – AI-created works cannot be copyrighted.

You’d think this was a simple case, but it has been rumbling on for years at the hands of one Stephen Thaler, founder of Missouri neural network biz Imagination Engines, who tried to copyright artwork generated by what he calls the Creativity Machine, a computer system he owns. The piece, A Recent Entrance to Paradise, was reproduced on page 4 of the complaint [PDF].

The US Copyright Office refused the application because copyright laws are designed to protect human works. “The office will not register works ‘produced by a machine or mere mechanical process’ that operates ‘without any creative input or intervention from a human author’ because, under the statute, ‘a work must be created by a human being’,” the review board told Thaler’s lawyer after his second attempt was rejected last year.

This was not a satisfactory response for Thaler, who then sued the US Copyright Office and its director, Shira Perlmutter. “The agency actions here were arbitrary, capricious, an abuse of discretion and not in accordance with the law, unsupported by substantial evidence, and in excess of Defendants’ statutory authority,” the lawsuit claimed.

But handing down her ruling on Friday, Judge Beryl Howell wouldn’t budge, pointing out that “human authorship is a bedrock requirement of copyright” and “United States copyright law protects only works of human creation.”

“Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them,” she wrote.

Though she acknowledged the need for copyright to “adapt with the times,” she shut down Thaler’s pleas by arguing that copyright protection can only be sought for something that has “an originator with the capacity for intellectual, creative, or artistic labor. Must that originator be a human being to claim copyright protection? The answer is yes.”

Unsurprisingly Thaler’s legal people took an opposing view. “We strongly disagree with the district court’s decision,” University of Surrey Professor Ryan Abbott told The Register.

“In our view, the law is clear that the American public is the primary beneficiary of copyright law, and the public benefits when the generation and dissemination of new works are promoted, regardless of how those works are made. We do plan to appeal.”

This is just one legal case Thaler is involved in. Earlier this year, the US Supreme Court also refused to hear arguments that AI algorithms should be recognized by law as inventors on patent filings, once again brought by Thaler.

He sued the US Patent and Trademark Office (USPTO) in 2020 because patent applications he had filed on behalf of another of his AI systems, DABUS, were rejected. The USPTO refused to accept them as it could only consider inventions from “natural persons.”

That lawsuit was quashed, then taken to the US Court of Appeals, where it lost again. Thaler’s team finally turned to the Supreme Court, which wouldn’t give it the time of day.

When The Register asked Thaler to comment on the US Copyright Office defeat, he told us: “What can I say? There’s a storm coming.”

Source: AI-generated art cannot be copyrighted, judge rules • The Register

One Fan Ports Abandoned PS1 Classic ‘WipeOut’, Tells Sony to Either Let It Be or Make It Available for the World Again. Sony will shut it down, of course, because copyright and assholes.

More and more, as the video game industry matures, we find ourselves talking about game preservation and the disappearing culture of some older games as the original publishers abandon them. With publishers often leaving the public no legitimate way to purchase these old games, copyright law conspires with the situation to prevent the public from clawing back its half of the copyright bargain. The end result is studios and publishers that have enjoyed the fruits of copyright law for a period of time, only for that cultural output to be withheld from the public later on. By any plain reading of American copyright law, that outcome shouldn’t be acceptable.

When it comes to one classic PlayStation 1 title, it seems that one enterprising individual has very much refused to accept this outcome. A fan of the first-party Sony title WipeOut, an exclusive to the PS1, has ported the game such that it can be played in a web browser. And, just to drive the point home, they have essentially dared Sony to do something about it.

“Either let it be, or shut this thing down and get a real remaster going,” he told Sony in a recent blog post (via VGC). Despite the release of the PlayStation Classic, 2017’s Wipeout Omega Collection, and PS Plus adding old PS1 games to PS5 like Twisted Metal, there’s no way to play the original WipeOut on modern consoles and experience the futuristic racer’s incredible soundtrack and neo-Tokyo aesthetic in all their glory. So fans have taken it upon themselves to make the Psygnosis-developed hit accessible on PC.

As Dominic Szablewski details in his post and in a series of videos on this labor of love, getting it all to work took a great deal of unraveling of the source code. The codebase was a mess primarily because every iteration of the game simply layered new code on top of the last, meaning there was a lot of onion-peeling to be done to make it all work.

But work it does!

After a lot of detective work and elbow grease, Szablewski managed to resurrect a modified playable version of the game with an uncapped framerate that looks crisp and sounds great. He still recommends two other existing PC ports over his own, WipeOut Phantom Edition and an unnamed project by a user named XProger. However, those don’t come with the original source code, the legality of which he admits is “questionable at best.”

But again, what is the public supposed to do here? The original game simply can’t be bought legitimately and hasn’t been available for some time. Violating copyright law certainly isn’t the right answer, but neither is allowing a publisher to let cultural output go to rot simply because it doesn’t want to do anything about it.

“Sony has demonstrated a lack of interest in the original WipeOut in the past, so my money is on their continuing absence,” Szablewski wrote. “If anyone at Sony is reading this, please consider that you have (in my opinion) two equally good options: either let it be, or shut this thing down and get a real remaster going. I’d love to help!”

Sadly, I’m fairly certain I know how this story will end.

Source: One Fan Ports Abandoned PS1 Classic ‘WipeOut’, Dares Sony To Do Something About It | Techdirt