FCC fines America’s largest wireless carriers $200 million for selling customer location data without permission

The Federal Communications Commission has slapped the largest mobile carriers in the US with a collective $200 million in fines for selling access to their customers’ location information without consent. AT&T was ordered to pay $57 million, while Verizon has to pay $47 million. Sprint and T-Mobile, which have since merged, face a combined penalty of $92 million. The FCC conducted an in-depth investigation into the carriers’ unauthorized disclosure and sale of subscribers’ real-time location data after their activities came to light in 2018.

To sum up the practice in the words of FCC Commissioner Jessica Rosenworcel: The carriers sold “real-time location information to data aggregators, allowing this highly sensitive data to wind up in the hands of bail-bond companies, bounty hunters, and other shady actors.” According to the agency, the scheme started to unravel following public reports that a sheriff in Missouri was tracking numerous individuals using location information that a company called Securus obtained from wireless carriers. Securus provides communications services to correctional facilities in the country.

While the carriers eventually ceased their activities, the agency said they continued operating their programs for a year after the practice was revealed and after they promised the FCC that they would stop selling customer location data. Further, they carried on without reasonable safeguards in place to ensure that the legitimate services using their customers’ information, such as roadside assistance and medical emergency services, truly were obtaining users’ consent to track their locations.

Source: FCC fines America’s largest wireless carriers $200 million for selling customer location data

Helldivers 2 PC players suddenly have to link to a PSN account and they’re not being chill about it

[…]

This has royally pissed off PC players, even though it’s worth noting that creating a PSN account is free. The change has led to review bombing on Steam and many promises to abandon the game when the linking becomes a requirement, according to a report by Kotaku. The complaints range from frustration over yet another barrier to entry after downloading an 80GB game to fears that their PSN accounts could be hacked. While it is true that Sony was the target of a huge hack that impacted 77 million PSN accounts, that was back in 2011. Obama was still in his first term. Also worth noting? Steam was hacked in 2011 too, impacting 35 million accounts.

[…]

Source: Helldivers 2 PC players suddenly have to link to a PSN account and they’re not being chill about it

Nintendo blitzes GitHub with over 8,000 emulator-related DMCA takedowns

Nintendo sent a Digital Millennium Copyright Act (DMCA) notice for over 8,000 GitHub repositories hosting code from the Yuzu Switch emulator, which the Zelda maker previously described as enabling “piracy at a colossal scale.” The sweeping takedown comes two months after Yuzu’s creators quickly settled a lawsuit with Nintendo and its notoriously trigger-happy legal team for $2.4 million.

GamesIndustry.biz first reported on the DMCA notice, affecting 8,535 GitHub repos. Redacted entities representing Nintendo assert that the Yuzu source code contained in the repos “illegally circumvents Nintendo’s technological protection measures and runs illegal copies of Switch games.”

GitHub wrote on the notice that developers will have time to change their content before it’s disabled. In keeping with its developer-friendly approach and branding, the Microsoft-owned platform also offered legal resources and guidance on submitting DMCA counter-notices.

Nintendo’s legal blitz, perhaps not coincidentally, comes as game emulators are enjoying a resurgence. Last month, Apple loosened its restrictions on retro game players in the App Store (likely in response to regulatory threats), leading to the Delta emulator establishing itself as the de facto choice and reaching the App Store’s top spot. Nintendo may have calculated that emulators’ moment in the sun threatened its bottom line and began by squashing those that most immediately imperiled its income stream.

Sadly, Nintendo’s largely undefended legal assault against emulators ignores a crucial use for them that isn’t about piracy. Game historians see the software as a linchpin of game preservation. Without emulators, Nintendo and other copyright holders could make a part of history obsolete for future generations, as their corresponding hardware will eventually be harder to come by.

Source: Nintendo blitzes GitHub with over 8,000 emulator-related DMCA takedowns

Russia arrests in absentia former world chess champion Garry Kasparov on foreign agent and terrorist charges

Russia has arrested Garry Kasparov in absentia, charging him under foreign agent and terrorism laws – much to the former chess champion’s amusement.

The city court in Syktyvkar, the largest city in Russia’s northwestern Komi region, announced it had arrested the grandmaster in absentia alongside former Russian parliament member Gennady Gudkov; Ivan Tyutrin, co-founder of the Free Russia Forum, which has been designated an ‘undesirable organisation’ in the country; and former environmental activist Yevgenia Chirikova.

All were charged with setting up a terrorist society, according to the court’s press service. As all were charged in their absence, none were physically held in custody.

[…]

Kasparov responded to the court’s bizarre arrest statement in an April 24 post shared on X, formerly Twitter. “In absentia is definitely the best way I’ve ever been arrested,” he said. “Good company, as well. I’m sure we’re all equally honoured that Putin’s terror state is spending time on this that would otherwise go persecuting and murdering.”

Kasparov has found himself in Russian President Vladimir Putin’s firing line after he voiced his opposition to the country’s leader. He has also pursued pro-democracy initiatives in Russia. But he felt unable to continue living in Russia after he was jailed and allegedly beaten by police in 2012, according to the Guardian. He was granted Croatian citizenship in 2014 following repeated difficulties in Russia.

[…]

Source: Russia arrests former world chess champion Garry Kasparov on foreign agent and terrorist charges – World News – Mirror Online

People Are Slowly Realizing Their Auto Insurance Rates Are Skyrocketing Because Their Car Is Covertly Spying On Them

Last month the New York Times’ Kashmir Hill published a major story on how GM collects driver behavior data, then sells access (through LexisNexis) to insurance companies, which then jack up your rates.

The absolute bare minimum you could expect from the auto industry here is that they’d do this in a way that’s clear to car owners. But of course they aren’t; they’re burying “consent” deep in the mire of some hundred-page end user agreement nobody reads, often attached not to the car purchase itself but to the apps consumers use to manage roadside assistance and other features.

Since Kashmir’s story was published, she says she’s been inundated with complaints by consumers about similar behavior. She’s even discovered that she’s one of the folks GM spied on and tattled to insurers about. In a follow-up story, she recounts how she and her husband bought a Chevy Bolt, were auto-enrolled in a driver assistance program, then had their data (which they couldn’t access) sold to insurers.

GM’s now facing 10 different federal lawsuits from customers pissed off that they were surreptitiously tracked and then forced to pay significantly more for insurance:

“In 10 federal lawsuits filed in the last month, drivers from across the country say they did not knowingly sign up for Smart Driver but recently learned that G.M. had provided their driving data to LexisNexis. According to one of the complaints, a Florida owner of a 2019 Cadillac CTS-V who drove it around a racetrack for events saw his insurance premium nearly double, an increase of more than $5,000 per year.”

GM (and some apologists) will of course proclaim that it’s only fair that reckless drivers pay more, but that’s generally not how it works. Pressured for unlimited quarterly returns, insurance companies will use absolutely anything they find in the data to justify raising rates.

[…]

Automakers — which have long had some of the worst privacy reputations in all of tech — are one of countless industries that lobbied relentlessly for decades to ensure Congress never passed a federal privacy law or regulated dodgy data brokers. And that the FTC — the over-burdened regulator tasked with privacy oversight — lacks the staff, resources, or legal authority to police the problem at any real scale.

The end result is just a parade of scandals. And if Hill were so inclined, she could write a similar story about every tech sector in America, given everything from your smart TV and electricity meter to refrigerator and kids’ toys now monitor your behavior and sell access to those insights to a wide range of dodgy data broker middlemen, all with nothing remotely close to ethics or competent oversight.

And despite the fact that this free for all environment is resulting in no limit of dangerous real-world harms, our Congress has been lobbied into gridlock by a cross-industry coalition of companies with near-unlimited budgets, all desperately hoping that their performative concerns about TikTok will distract everyone from the fact we live in a country too corrupt to pass a real privacy law.

Source: People Are Slowly Realizing Their Auto Insurance Rates Are Skyrocketing Because Their Car Is Covertly Spying On Them | Techdirt

Ring Spy Doorbell customers get measly $5.6 million in refunds in privacy settlement

In a 2023 complaint, the FTC accused the doorbell camera and home security provider of allowing its employees and contractors to access customers’ private videos. Ring allegedly used such footage to train algorithms without consent, among other purposes.

Ring was also charged with failing to implement key security protections, which enabled hackers to take control of customers’ accounts, cameras and videos. This led to “egregious violations of users’ privacy,” the FTC noted.

The resulting settlement required Ring to delete content that was found to be unlawfully obtained, establish stronger security protections

[…]

the FTC is sending 117,044 PayPal payments to impacted consumers who had certain types of Ring devices — including indoor cameras — during the timeframes that the regulators allege unauthorized access took place.

[…]

Earlier this year, the California-based company separately announced that it would stop allowing police departments to request doorbell camera footage from users, marking an end to a feature that had drawn criticism from privacy advocates.

Source: Ring customers get $5.6 million in refunds in privacy settlement | AP News

Considering the size of Ring and its customer base, this is a very, very light tap on the wrist for delivering poor security and a product that spies on everything on the street.

When You Need To Post A Lengthy Legal Disclaimer With Your Parody Song, You Know Copyright Is Broken

In a world where copyright law has run amok, even creating a silly parody song now requires a massive legal disclaimer to avoid getting sued. That’s the absurd reality we live in, as highlighted by the brilliant musical parody project “There I Ruined It.”

Musician Dustin Ballard creates hilarious videos, some of which reimagine popular songs in the style of wildly different artists, like Simon & Garfunkel singing “Baby Got Back” or the Beach Boys covering Jay-Z’s “99 Problems.” He appears to create the music himself, including singing the vocals, but uses an AI tool to adjust the vocal styles to match the artist he’s trying to parody. The results are comedic gold. However, Ballard felt the need to plaster his latest video with paragraphs of dense legalese just to avoid frivolous copyright strikes.

When our intellectual property system is so broken that it stifles obvious works of parody and creative expression, something has gone very wrong. Comedy and commentary are core parts of free speech, but overzealous copyright law is allowing corporations to censor first and ask questions later. And that’s no laughing matter.

If you haven’t yet watched the video above (and I promise you, it is totally worth it to watch), the last 15 seconds involve this long scrolling copyright disclaimer. It is apparently targeted at the likely mythical YouTube employee who might read it in assessing whether or not the song is protected speech under fair use.

And here’s a transcript:

The preceding was a work of parody which comments on the perceived misogynistic lyrical similarities between artists of two different eras: the Beach Boys and Jay-Z (Shawn Corey Carter). In the United States, parody is protected by the First Amendment under the Fair Use exception, which is governed by the factors enumerated in section 107 of the Copyright Act. This doctrine provides an affirmative defense for unauthorized uses that would otherwise amount to copyright infringement. Parody aside, copyrights generally expire 95 years after publication, so if you are reading this in the 22nd century, please disregard.

Anyhoo, in the unlikely event that an actual YouTube employee sees this, I’d be happy to sit down over coffee and talk about parody law. In Campbell v. Acuff-Rose Music, Inc., for example, the U.S. Supreme Court allowed for 2 Live Crew to borrow from Roy Orbison’s “Pretty Woman” on grounds of parody. I would have loved to be a fly on the wall when the justices reviewed those filthy lyrics! All this to say, please spare me the trouble of attempting to dispute yet another frivolous copyright claim from my old pals at Universal Music Group, who continue to collect the majority of this channel’s revenue. You’re ruining parody for everyone.

In 2024, you shouldn’t need to have a law degree to post a humorous parody song.

But, that is the way of the world today. The combination of the DMCA’s “take this down or else” regime and the way YouTube’s Content ID caters to big entertainment companies allows bogus copyright claims to have a real impact in all sorts of awful ways.

We’ve said it before: copyright remains the one tool that allows for the censorship of content, but it’s supposed to be applied only to situations of actual infringement. But because Congress and the courts have decided that copyright sits in some sort of weird First Amendment-free zone, it allows for the removal of content before there is any adjudication of whether or not the content is actually infringing.

And that has been a real loss to culture. There’s a reason we have fair use. There’s a reason we allow people to create parodies. It’s because it adds to and improves our cultural heritage. The video above (assuming it’s still available) is an astoundingly wonderful cultural artifact. But it’s one that is greatly at risk due to abusive copyright claims.

Nope, it has been taken down by Universal Music Group

Let’s also take this one step further. Tennessee just recently passed a new law, the ELVIS Act (Ensuring Likeness Voice and Image Security Act). This law expands the already problematic space of publicity rights based on a nonsense moral panic about AI and deepfakes. Because there’s an irrational (and mostly silly) fear of people taking the voice and likeness of musicians, this law broadly outlaws that.

While the ELVIS Act has an exemption for works deemed to be “fair use,” as with the rest of the discussion above, copyright law today seems to (incorrectly, in my opinion) take a “guilty until proven innocent” approach to copyright and fair use. That is, everything is set up to assume it’s infringing unless you can convince a court that it’s fair use, and that leads to all sorts of censorship.

[…]

Source: When You Need To Post A Lengthy Legal Disclaimer With Your Parody Song, You Know Copyright Is Broken | Techdirt

Europol asks tech firms, governments to unencrypt your private messages

In a joint declaration of European police chiefs published over the weekend, Europol said it needs lawful access to private messages, and said tech companies need to be able to scan them (ostensibly impossible with E2EE implemented) to protect users. Without such access, cops fear they won’t be able to prevent “the most heinous of crimes” like terrorism, human trafficking, child sexual abuse material (CSAM), murder, drug smuggling and other crimes.

“Our societies have not previously tolerated spaces that are beyond the reach of law enforcement, where criminals can communicate safely and child abuse can flourish,” the declaration said. “They should not now.”

Not exactly true – most EU countries do not tolerate anyone opening your private (snail) mail without a warrant.

The joint statement, which was agreed to in cooperation with the UK’s National Crime Agency, isn’t exactly making a novel claim. It’s nearly the same line of reasoning that the Virtual Global Taskforce, an international law enforcement group founded in 2003 to combat CSAM online, made last year when Meta first started talking about implementing E2EE on Messenger and Instagram.

While Meta is not named in this latest declaration itself [PDF], Europol said that its opposition to E2EE “comes as end-to-end encryption has started to be rolled out across Meta’s messenger platform.” The UK NCA made a similar statement in its comments on the Europol missive released over the weekend.

The declaration urges the tech industry not to see user privacy as a binary choice, but rather as something that can be assured without depriving law enforcement of access to private communications.

Not really though. And if law enforcement can get at it, then so can everyone else.

[…] Gail Kent, Meta’s global policy director for Messenger, said in December the E2EE debate is far more complicated than the child safety issue that law enforcement makes it out to be, and leaving an encryption back door in products for police to take advantage of would only hamper trust in its messaging products.

Kent said Meta’s E2EE implementation prevents client-side scanning of content, which has been one of the biggest complaints from law enforcement. Even that technology would violate user trust, she said, as it serves as a workaround to intrude on user privacy without compromising encryption – an approach Meta is unwilling to take, according to Kent’s blog post.

As was pointed out during previous attempts to undermine E2EE, not only would an encryption back door (client-side scanning or otherwise) provide an inroad for criminals to access secured information, it wouldn’t stop criminals from finding some other way to send illicit content without the prying eyes of law enforcement able to take a look.
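
Client-side scanning, the approach discussed above, means checking content on the device before it is encrypted, typically by matching it against a list of known-bad hashes. Here is a minimal conceptual sketch in Python; the blocklist and messages are invented for illustration, and real proposals use perceptual hashes rather than exact SHA-256 matches:

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known illicit content.
# Real deployments would use perceptual hashes so near-duplicates also match.
BLOCKLIST = {hashlib.sha256(b"known-bad-content").hexdigest()}

def client_side_scan(plaintext: bytes) -> bool:
    """Return True if the message matches the blocklist.

    The check runs BEFORE encryption, which is why critics call it a
    workaround: the transport stays end-to-end encrypted, but the
    endpoint inspects the content anyway.
    """
    return hashlib.sha256(plaintext).hexdigest() in BLOCKLIST

def send_message(plaintext: bytes):
    if client_side_scan(plaintext):
        # Flagged content would be blocked or reported instead of sent.
        return None
    # The encryption step itself is untouched (stubbed out here).
    return b"<e2ee ciphertext of: " + plaintext + b">"
```

The sketch also makes the objection concrete: whoever controls the blocklist controls what gets flagged, and nothing stops criminals from simply using software without the scanner.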

[…]

“We don’t think people want us reading their private messages, so have developed safety measures that prevent, detect and allow us to take action against this heinous abuse, while maintaining online privacy and security,” a Meta spokesperson told us last year. “It’s misleading and inaccurate to say that encryption would have prevented us from identifying and reporting accounts … to the authorities.”

In other words, don’t expect Meta to cave on this one when it can develop a fancy new detection algorithm instead.

Source: Europol asks tech firms, governments to get rid of E2EE • The Register

And every time they come for your freedom whilst quoting child safety – look out.

EDPS warns of EU plans to spy on personal chat messages

This week, during the presentation of its 2023 annual review (pdf), the European privacy supervisor EDPS again warned about European plans to monitor chat messages from European citizens. According to the watchdog, this leads to ‘irreversible surveillance’.

At the beginning of 2022, the European Commission came up with a proposal to inspect all chat messages and other communications from citizens for child abuse material. In the case of end-to-end encrypted chat services, this would be done via client-side scanning.

The European Parliament voted against the proposal, but put forward an amended version of its own.

However, the European member states have not yet taken a joint position.

As early as 2022, the EDPS raised the alarm about the European Commission’s proposal to monitor citizens’ communications, which it sees as a serious risk to the fundamental rights of 450 million Europeans.

Source: EDPS warns of European plans to monitor chat messages – Emerce

Sure, the EU is not much of a democracy, with the European Council (which is where the actual power is) not being elected at all, but that doesn’t mean it has to be a surveillance police state.

US Hospital Websites Almost All Give your Data to 3rd parties, but Many just don’t tell you about it

In this cross-sectional analysis of a nationally representative sample of 100 nonfederal acute care hospitals, 96.0% of hospital websites transmitted user information to third parties, whereas 71.0% of websites included a publicly accessible privacy policy. Of 71 privacy policies, 40 (56.3%) disclosed specific third-party companies receiving user information.

[…]

Of 100 hospital websites, 96 […] transferred user information to third parties. Privacy policies were found on 71 websites […] 70 […] addressed how collected information would be used, 66 […] addressed categories of third-party recipients of user information, and 40 […] named specific third-party companies or services receiving user information.

[…]

In this cross-sectional study of a nationally representative sample of 100 nonfederal acute care hospitals, we found that although 96.0% of hospital websites exposed users to third-party tracking, only 71.0% of websites had an available website privacy policy. Policies averaged more than 2500 words in length and were written at a college reading level. Given estimates that more than one-half of adults in the US lack literacy proficiency and that the average patient in the US reads at a grade 8 level, the length and complexity of privacy policies likely pose substantial barriers to users’ ability to read and understand them.27,32
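
Grade-level estimates like those above come from standard readability formulas. As a rough illustration, the Flesch-Kincaid grade level can be computed from sentence, word and syllable counts; here is a sketch in Python, where the syllable counter is a crude vowel-group heuristic and the sample texts are invented:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

simple = "The dog ran. The cat sat."
legalese = ("Notwithstanding the aforementioned limitations, the undersigned "
            "hereby irrevocably authorizes the dissemination of personally "
            "identifiable information to affiliated third-party entities.")
# Dense legalese scores a far higher grade level than plain prose.
assert flesch_kincaid_grade(legalese) > 12 > flesch_kincaid_grade(simple)
```

A score of 13 or above roughly corresponds to the college reading level the study reports.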

[…]

Only 56.3% of policies (and only 40 hospitals overall) identified specific third-party recipients. Named third-parties tended to be companies familiar to users, such as Google. This lack of detail regarding third-party data recipients may lead users to assume that they are being tracked only by a small number of companies that they know well, when, in fact, hospital websites included in this study transferred user data to a median of 9 domains.
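
The "median of 9 domains" figure comes from counting the distinct external hosts a page loads resources from. A simplified sketch of that kind of measurement follows; the URLs are invented, and the registrable-domain extraction is naive (a real study would use the Public Suffix List):

```python
from urllib.parse import urlparse

def registrable_domain(host: str) -> str:
    # Naive: keep the last two labels, e.g.
    # "www.google-analytics.com" -> "google-analytics.com".
    # Real measurements use the Public Suffix List instead.
    return ".".join(host.split(".")[-2:])

def third_party_domains(first_party: str, resource_urls: list) -> set:
    """Distinct third-party domains a page pulled resources from."""
    fp = registrable_domain(first_party)
    domains = {registrable_domain(urlparse(u).hostname or "")
               for u in resource_urls}
    return {d for d in domains if d != fp}

# Invented example: resources observed while loading one hospital homepage.
resources = [
    "https://www.examplehospital.org/css/main.css",
    "https://www.google-analytics.com/analytics.js",
    "https://connect.facebook.net/en_US/fbevents.js",
    "https://cdn.doubleclick.net/tag.js",
]
trackers = third_party_domains("www.examplehospital.org", resources)
```

Repeating this over each of the 100 hospital homepages and taking the median of the per-site counts yields the kind of figure the study reports.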

[…]

In addition to presenting risks for users, inadequate privacy policies may pose risks for hospitals. Although hospitals are generally not required under federal law to have a website privacy policy that discloses their methods of collecting and transferring data from website visitors, hospitals that do publish website privacy policies may be subject to enforcement by regulatory authorities like the Federal Trade Commission (FTC).33 The FTC has taken the position that entities that publish privacy policies must ensure that these policies reflect their actual practices.34 For example, entities that promise they will delete personal information upon request but fail to do so in practice may be in violation of the FTC Act.34

[…]

Source: User Information Sharing and Hospital Website Privacy Policies | Ethics | JAMA Network Open | JAMA Network

How private equity has used copyright to cannibalise the past at the expense of the future

Walled Culture has been warning about the financialisation and securitisation of music for two years now. Those obscure but important developments mean that the owners of copyrights are increasingly detached from the creative production process. They regard music as just another asset, like gold, petroleum or property, to be exploited to the maximum. A Guest Essay in the New York Times points out one of the many bad consequences of this trend:

Does that song on your phone or on the radio or in the movie theater sound familiar? Private equity — the industry responsible for bankrupting companies, slashing jobs and raising the mortality rates at the nursing homes it acquires — is making money by gobbling up the rights to old hits and pumping them back into our present. The result is a markedly blander music scene, as financiers cannibalize the past at the expense of the future and make it even harder for us to build those new artists whose contributions will enrich our entire culture.

As well as impoverishing our culture, the financialisation and securitisation of music is making life even harder for the musicians it depends on:

In the 1990s, as the musician and indie label founder Jenny Toomey wrote recently in Fast Company, a band could sell 10,000 copies of an album and bring in about $50,000 in revenue. To earn the same amount in 2024, the band’s whole album would need to rack up a million streams — roughly enough to put each song among Spotify’s top 1 percent of tracks. The music industry’s revenues recently hit a new high, with major labels raking in record earnings, while the streaming platforms’ models mean that the fractions of pennies that trickle through to artists are skewed toward megastars.

Part of the problem is the extremely low rates paid by streaming services. But the larger issue is the power imbalance within all the industries based on copyright. The people who actually create books, music, films and the rest are forced to accept bad deals with the distribution companies. Walled Culture the book (free ebook versions) details the painfully low income the vast majority of artists derive from their creativity, and how most are forced to take side jobs to survive. This daily struggle is so widespread that it is no longer remarked upon. It is one of the copyright world’s greatest successes that the public and many creators now regard this state of affairs as a sad but unavoidable fact of life. It isn’t.

The New York Times opinion piece points out that there are signs private equity is already moving on to its next market/victim, having made its killing in the music industry. But one thing is for sure. New ways of financing today’s exploited artists are needed, and not ones cooked up by Wall Street. Until musicians and creators in general take back control of their works, rather than acquiescing in the hugely unfair deal that is copyright, it will always be someone else who makes most of the money from their unique gifts.

Source: How private equity has used copyright to cannibalise the past at the expense of the future – Walled Culture

Of course, the whole model of continuously making money from a single creation is a bit fucked up. If a businessman were to ask for money every time someone read their email, that would be plain stupid. How is this any different?

Dutch investigation into Android smartphones leads to new lawsuit against Google Play Services Constant Surveillance

The Mass Damage & Consumer Foundation today announced that it has initiated a class action lawsuit against Google over its Android operating system. The reason is a new study that shows how Dutch Android smartphones systematically transfer large amounts of information about device use to Google. Even with the most privacy-friendly options enabled, user data cannot be prevented from ending up on Google’s servers. According to the foundation, it is not clear to Android users that this happens, let alone that they have consented to it.

For the research, a team of scientists purchased several Android phones between 2022 and 2024 and captured, decrypted and analyzed the outgoing traffic on a Dutch server. This shows that a bundle of processes called ‘Google Play Services’ runs silently in the background and cannot be disabled or deleted. These processes continuously record what happens on and around the phone, sharing with Google which apps someone uses, which products they order, and even whether users are sleeping.

More than nine million Dutch people

The Mass Damage & Consumer Foundation states that Google’s conduct violates a large number of Dutch and European rules intended to protect consumers. The foundation wants to use the lawsuit to force Google to implement fundamental (privacy) changes to the Android platform and to offer an opt-out option for every form of data it collects, not just a few.

[…]

Identity can be easily traced

The research paid specific attention to the use of unique identifiers (UIDs). These are characteristics that Google can link to the collected data, such as an e-mail address or the Android ID, a unique serial number by which someone is known to Google. The use of these identifiers is sensitive: Google itself advises against using unique identifiers in its own guidelines for app developers, since users could unintentionally be tracked across multiple apps. Yet one or more of these unique identifiers were found in the data transmissions examined – without exception. The researchers point out that this makes it easy to trace someone’s identity to virtually everything that happens on and around an Android device.
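
The re-identification risk the researchers describe follows directly from attaching a stable identifier to every transmission: whoever holds the data can group otherwise-unrelated events back into a single profile. A toy sketch in Python, with events and field names invented for illustration:

```python
from collections import defaultdict

# Invented telemetry events, each tagged with a persistent Android ID.
events = [
    {"android_id": "a1b2c3", "event": "app_open", "app": "banking"},
    {"android_id": "ffee99", "event": "app_open", "app": "maps"},
    {"android_id": "a1b2c3", "event": "purchase", "app": "shopping"},
    {"android_id": "a1b2c3", "event": "screen_off", "hour": 23},  # sleep inference
]

def build_profiles(events):
    """Group events by the stable identifier: one profile per device owner."""
    profiles = defaultdict(list)
    for e in events:
        payload = {k: v for k, v in e.items() if k != "android_id"}
        profiles[e["android_id"]].append(payload)
    return dict(profiles)

# Everything device "a1b2c3" ever sent is now linked to one identity.
profiles = build_profiles(events)
```

Rotating or omitting the identifier breaks exactly this join, which is why its unconditional presence in the captured traffic matters.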

[…]

Source: Dutch investigation into Android smartphones leads to new lawsuit against Google – Mass Damage & Consumer Foundation

OpenAI and Google train AIs on transcriptions of YouTube videos – YouTube and NYTimes desperately try to profit somehow without doing anything except filing lawsuits

OpenAI and Google trained their AI models on text transcribed from YouTube videos, potentially violating creators’ copyrights, according to The New York Times.

Note – the New York Times is embroiled in its own copyright lawsuits over AI, where it clearly shows it doesn’t understand that an AI reading content is the same as a person reading content, that content offered up for free with no paywall is free for everyone, and that entering content and then asking for it back doesn’t mean that copyright is infringed.

[…]

It comes just days after YouTube CEO Neal Mohan said in an interview with Bloomberg Originals that OpenAI’s alleged use of YouTube videos to train its new text-to-video generator, Sora, would go against the platform’s policies.

According to the NYT, OpenAI used its Whisper speech recognition tool to transcribe more than one million hours of YouTube videos, which were then used to train GPT-4. The Information previously reported that OpenAI had used YouTube videos and podcasts to train the two AI systems. OpenAI president Greg Brockman was reportedly among the people on this team. Per Google’s rules, “unauthorized scraping or downloading of YouTube content” is not allowed.

[…]

The way the data is stored in an ML model means that the data is not scraped or downloaded – unless you consider every view to be a download or scrape.

What this shows is a determination to ride the AI hype and find a way to monetise content that has already been released freely to the public, without any extra effort apart from hiring a bunch of lawyers. The players are big and the payoff is potentially huge in terms of cash, but in terms of setting back progress, throwing everything under the copyright bus is a staggering disaster.

Source: OpenAI and Google reportedly used transcriptions of YouTube videos to train their AI models

Academics Try to Figure Out Apple’s default apps Privacy Settings and Fail

A study has concluded that Apple’s privacy practices aren’t particularly effective, because default apps on the iPhone and Mac have limited privacy settings and confusing configuration options.

The research was conducted by Amel Bourdoucen and Janne Lindqvist of Aalto University in Finland. The pair noted that while many studies had examined privacy issues with third-party apps for Apple devices, very little literature investigates the issue in first-party apps – like Safari and Siri.

The aims of the study [PDF] were to investigate how much data Apple’s own apps collect and where it’s sent, and to see if users could figure out how to navigate the landscape of Apple’s privacy settings.

[…]

“Our work shows that users may disable default apps, only to discover later that the settings do not match their initial preference,” the paper states.

“Our results demonstrate users are not correctly able to configure the desired privacy settings of default apps. In addition, we discovered that some default app configurations can even reduce trust in family relationships.”

The researchers criticize data collection by Apple apps like Safari and Siri, where that data is sent, how users can (and can’t) disable that data tracking, and how Apple presents privacy options to users.

The paper illustrates these issues in a discussion of Apple’s Siri voice assistant. While users can ostensibly choose not to enable Siri in the initial setup on macOS-powered devices, it still collects data from other apps to provide suggestions. To fully disable Siri, Apple users must find privacy-related options across five different submenus in the Settings app.

Apple’s own documentation for how its privacy settings work isn’t good either. It doesn’t mention every privacy option, explain what is done with user data, or highlight whether settings are enabled or disabled. Also, it’s written in legalese, which almost guarantees no normal user will ever read it.

[…]

The authors also conducted a survey of Apple users and quizzed them on whether they really understood how privacy options worked on iOS and macOS, and what apps were doing with their data.

While the survey was very small – it covered just 15 respondents – the results indicated that Apple’s privacy settings could be hard to navigate.

Eleven of the surveyed users were well aware of data tracking and that it was mostly on by default. However, when informed about how privacy options work in iOS and macOS, nine of the surveyed users were surprised about the scope of data collection.

[…]

Users were also tested on their knowledge of privacy settings for eight default apps – including Siri, Family Sharing, Safari, and iMessage. According to the study, none could confidently figure out how to work their way around the Settings menu to completely disable default apps. When confused, users relied on searching the internet for answers, rather than Apple’s privacy documentation.

[…]

Assuming Apple has any interest in fixing these shortcomings, the team made a few suggestions. Since many users first went to operating system settings instead of app-specific settings when attempting to disable data tracking, centralizing these options would assist users – and would prevent them from getting frustrated and giving up on finding the settings they’re looking for.

Informing users what specific settings do would also be an improvement – many settings are labelled with just a name, but no further details. The researchers suggest replacing Apple’s jargon-filled privacy policy with descriptions that are in the settings menu itself, and maybe even providing some infographic illustrations as well. Anything would be better than legalese.

While this study probably won’t convince Apple to change its ways, lawsuits might have better luck. Apple has been sued multiple times for not transparently disclosing its data tracking. One of the latest suits calls out Apple’s broken promises about privacy, claiming that “Apple does not honor users’ requests to restrict data sharing.”

[…]

Reminder: Apple has a multi-billion-dollar online ads business that it built while strongly criticizing Facebook and others for their privacy practices.

Source: Academics reckon Apple’s default apps have privacy pitfalls • The Register

Roku’s New Idea to Show You Ads When You Pause Your Video Game and Spy on the Content on Your HDMI Cable Is Horrifying

[…]

Roku describes its idea in a patent application, which largely flew under the radar when it was filed in November, and was recently spotted by the streaming newsletter Lowpass. In the application, Roku describes a system that’s able to detect when users pause third-party hardware and software and show them ads during that time.

According to the company, its new system works via an HDMI connection. This suggests that it’s designed to target users who play video games or watch content from other streaming services on their Roku TVs. Lowpass described Roku’s conundrum perfectly:

“Roku’s ability to monetize moments when the TV is on but not actively being used goes away when consumers switch to an external device, be it a game console or an attached streaming adapter from a competing manufacturer,” Janko Roettgers, the newsletter’s author, wrote. “Effectively, HDMI inputs have been a bit of a black box for Roku.”

In addition, Roku wouldn’t just show you any old ads. The company states that its innovation can recognize the content that users have paused and deliver customized related ads. Roku’s system would do this by using audio or video-recognition technologies to analyze what the user is watching or analyze the content’s metadata, among other methods.

[…]

In the case of gaming, there’s also the danger of Roku mistaking a long moment of pondering for a pause and sticking in an ad right when you’re getting ready to face the final boss. The company is aware of this potential failure and points out that its system will monitor the frames of the content being watched to ensure there actually was a pause. It also plans to use other methods, such as analyzing the audio feed on the TV for extended moments of silence, to confirm there has been a pause.

[…]

Source: Roku’s New Idea to Show You Ads When You Pause Your Video Game Is Horrifying

Google will delete data collected from private browsing

In hopes of settling a lawsuit challenging its data collection practices, Google has agreed to destroy web browsing data it collected from users browsing in Chrome’s private modes – which weren’t as private as you might have thought.

The lawsuit [PDF], filed in June 2020 on behalf of plaintiffs Chasom Brown, Maria Nguyen, and William Byatt, sought to hold Google accountable for making misleading statements about privacy.

[…]

“Despite its representations that users are in control of what information Google will track and collect, Google’s various tracking tools, including Google Analytics and Google Ad Manager, are actually designed to automatically track users when they visit webpages – no matter what settings a user chooses,” the complaint claims. “This is true even when a user browses in ‘private browsing mode.'”

Chrome’s Incognito mode only provides privacy in the client by not keeping a locally stored record of the user’s browsing history. It does not shield website visits from Google.

[…]

During the discovery period from September 2020 through March 2022, Google produced more than 5.8 million pages of documents. Even so, it was sanctioned nearly $1 million in 2022 by Magistrate Judge Susan van Keulen – for concealing details about how it can detect when Chrome users employ Incognito mode.

What the plaintiffs’ legal team found might have been difficult to explain at trial.

“Google employees described Chrome Incognito Mode as ‘misleading,’ ‘effectively a lie,’ a ‘confusing mess,’ a ‘problem of professional ethics and basic honesty,’ and as being ‘bad for users, bad for human rights, bad for democracy,'” according to the declaration [PDF] of Mark C Mao, a partner with the law firm of Boies Schiller Flexner LLP, which represents the plaintiffs.

[…]

On December 26 last year the plaintiffs and Google agreed to settle the case. The plaintiffs’ attorneys have suggested the relief provided by the settlement is worth $5 billion – but nothing will be paid, yet.

The settlement covers two classes of people: one of which excludes those using Incognito mode while logged into their Google Account:

  • Class 1: All Chrome browser users with a Google account who accessed a non-Google website containing Google tracking or advertising code using such browser and who were (a) in “Incognito mode” on that browser and (b) were not logged into their Google account on that browser, but whose communications, including identifying information and online browsing history, Google nevertheless intercepted, received, or collected from June 1, 2016 through the present.
  • Class 2: All Safari, Edge, and Internet Explorer users with a Google account who accessed a non-Google website containing Google tracking or advertising code using such browser and who were (a) in a “private browsing mode” on that browser and (b) were not logged into their Google account on that browser, but whose communications, including identifying information and online browsing history, Google nevertheless intercepted, received, or collected from June 1, 2016 through the present.

The settlement [PDF] requires that Google: inform users that it collects private browsing data, both in its Privacy Policy and in an Incognito Splash Screen; “must delete and/or remediate billions of data records that reflect class members’ private browsing activities”; block third-party cookies in Incognito mode for the next five years (separately, Google is phasing out third-party cookies this year); and must delete the browser signals that indicate when private browsing mode is active, to prevent future tracking.

[…]

The class of affected people has been estimated to number about 136 million.

 

Source: Google will delete data collected from private browsing • The Register

The Digital Identity Wallet approved by parliament and council

On 28 February, the European Parliament gave its final approval to the Digital Identity Regulation, by 335 votes to 190 with 31 abstentions. It was adopted by the EU Council of Ministers on 26 March. The next step will be its publication in the Official Journal and its entry into force 20 days later.

The regulation introduces the EU Digital Identity Wallet, which will allow citizens to identify and authenticate themselves online to a range of public and private services, as well as store and share digital documents. Wallet users will also be able to create free digital signatures.

The EU Digital Identity Wallet will be used on a voluntary basis, and no one can be discriminated against for not using the wallet. The wallet will be open-source, to further encourage transparency, innovation, and enhance security.


Open-source code and new version of the ARF released for public feedback.

The open-source code of the EU Digital Identity Wallet, and the latest version of the Architecture and Reference Framework (ARF) are now available on our Github.

Version 1.3 of the ARF is now available to the public, to gather feedback before its adoption by the expert group. The ARF outlines how wallets distributed by Member States will function and contains a high-level overview of the standards and practices that are needed to build the wallet.

The open-source code of the wallet (also referred to as the reference implementation) is built on the specifications outlined in the ARF. It is based on a modular architecture composed of a set of business agnostic, reusable components which will evolve in incremental steps and can be reused across multiple projects.

[…]

Large Scale Pilot projects are currently test driving the many use cases of the EU Digital Identity Wallet in the real world.


Source: The Digital Identity Wallet is now on its way – EU Digital Identity Wallet –

This is an immensely complex project which is very, very important to get right. I am very curious whether they did.

Soofa Digital Kiosks Snatch Your Phone’s Data When You Walk By, sell it on

Digital kiosks from Soofa seem harmless, giving you bits of information alongside some ads. However, these kiosks popping up throughout the United States take your phone’s information and location data whenever you walk near them and sell it to local governments and advertisers, as first reported by NBC Boston on Monday.

“At Soofa, we developed the first pedestrian impressions sensor that measures accurate foot traffic in real-time,” says a page on the company’s website. “Soofa advertisers can check their analytics dashboard anytime to see how their campaigns are tracking towards impressions goals.”

While data tracking is commonplace online, it’s becoming more pervasive in the real world. Whenever you walk past a Soofa kiosk, it collects your phone’s unique identifier (MAC address), manufacturer, and signal strength. This allows it to track anyone who walks within a certain, unspecified range. It then creates a dashboard to share with advertisers and local governments to display analytics about how many people are walking and engaging with its billboards.
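A hedged sketch of how the “pedestrian impressions” counting described above could work in principle, assuming (hypothetically) that each observation is a tuple of hour, MAC address and signal strength:

```python
from collections import defaultdict

# Hypothetical sensor observations: (hour, MAC address, signal strength in dBm).
observations = [
    ("09:00", "AA:BB:CC:01:02:03", -60),
    ("09:00", "AA:BB:CC:01:02:03", -58),  # same phone seen twice
    ("09:00", "DD:EE:FF:04:05:06", -72),
    ("10:00", "AA:BB:CC:01:02:03", -65),  # same phone, an hour later
]

def unique_visitors_per_hour(obs):
    """Count distinct MAC addresses seen in each hour."""
    seen = defaultdict(set)
    for hour, mac, _strength in obs:
        seen[hour].add(mac)
    return {hour: len(macs) for hour, macs in seen.items()}

print(unique_visitors_per_hour(observations))  # {'09:00': 2, '10:00': 1}
```

Because the MAC address is stable, the same phone remains recognisable across hours, and across every kiosk it passes, which is exactly what turns a foot-traffic counter into a tracking network.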

This can offer local cities new ways to understand how people use public spaces, and how many people are reading notices posted on these digital kiosks. However, it also gives local governments detailed information on how people move through public space, and raises questions about how this data is being used.

[…]

In an email to Gizmodo, a Soofa spokesperson said the company does not share data with any third parties and only offers the dashboard to the organization that bought the kiosk. The company also claims your MAC address is anonymized by the time it gets to advertisers and local governments.

However, Soofa also tells advertisers how to effectively use your location data on its website. It notes that advertisers can track when you’ve been near a physical billboard or kiosk in the real world based on location data. Then, using cookies, the advertisers can send you more digital ads later on. While Soofa didn’t invent this technique, it certainly seems to be promoting it.

[…]

Source: These Digital Kiosks Snatch Your Phone’s Data When You Walk By

Mass claim CUIC against virus scanner (but really tracking spyware) Avast

Privacy First has teamed up with the Austrian organisation NOYB (founded by privacy activist Max Schrems) to form the new mass-claim organisation CUIC. CUIC stands for Consumers United in Court, also pronounceable as ‘CU in Court’ (see you in court).

[…]

Millions spied on by virus scanner

CUIC today filed suit against software company Avast, whose virus scanners illegally collected the browsing behaviour of millions of people on computers, tablets and phones, including in the Netherlands. This data was then resold to other companies through an Avast subsidiary for millions of euros. It included data about users’ health, locations visited, political affiliation, religious beliefs, sexual orientation and economic situation, all linked to each specific user through unique user IDs. In a press release, CUIC president Wilmar Hendriks put it as follows: “People thought they were safe with a virus scanner, but its very creator tracked everything they did on their computers. Avast sold this information to third parties for big money. They even advertised the goldmine of data they had captured. Companies like Avast should not be allowed to get away with this. That is why we are bringing this lawsuit. Those who won’t hear should feel.”

Fines

Back in March 2023, the Czech privacy regulator (UOOU) concluded that Avast had violated the GDPR and fined the company approximately €13.7 million. The US federal consumer authority, the Federal Trade Commission (FTC), also recently ordered Avast to pay $16.5 million in compensation to users, to stop selling or making collected data available to third parties, to delete the collected data, and to implement a comprehensive privacy programme.

The lawsuit CUIC filed today should lead to compensation for users in the Netherlands.

[…]

Source: Mass claim CUIC against virus scanner Avast launched – Privacy First

Amazon fined almost $8M in Poland over dark patterns

Poland’s competition and consumer protection watchdog has fined Amazon’s European subsidiary around $8 million (31.9 million zlotys) for “dark patterns” that messed with internet shoppers.

The preliminary ruling applies to Amazon EU SARL, which oversees Amazon’s Polish e-commerce site, Amazon.pl, out of Luxembourg. Poland’s Office of Competition and Consumer Protection said the decision, subject to appeal, reflected misleading practices related to product availability, delivery dates, and drop-off time guarantees.

According to the ruling, Amazon’s Polish operation repeatedly canceled customer orders for e-book readers and other gear. The online souk believed it was within its rights to do so because it considers its sales contract and delivery obligations to take effect only after an item has shipped, rather than when the customer pays.

But these abrupt cancellations left punters who thought they’d successfully paid for stuff and were awaiting delivery disappointed, sparking complaints to the watchdog, which has seemingly upheld the claims.

Not only that, the regulator was unimpressed that the language on Amazon’s website warning this could happen is difficult to read – “it is written in gray font on a white background, at the very bottom of the page.”

[…]

Source: Amazon fined almost $8M in Poland over ‘dark patterns’ • The Register

Age Verification Laws Drag Us Back to the Dark Ages of the Internet

The fundamental flaw with the age verification bills and laws passing rapidly across the country is the delusional, unfounded belief that putting hurdles between people and pornography is going to actually prevent them from viewing porn. What will happen, and is already happening, is that people – including minors – will go to unmoderated, actively harmful alternatives that don’t require handing over a government-issued ID to see people have sex. Meanwhile, performers and companies that are trying to do the right thing will suffer.

[…]

Source: Age Verification Laws Drag Us Back to the Dark Ages of the Internet

The legislators passing these bills are doing so under the guise of protecting children, but what’s actually happening is a widespread rewiring of the scaffolding of the internet. They ignore long-established legal precedent holding that age verification is unconstitutional, and will eventually and inevitably reduce everything we can see online without impossible privacy hurdles and compromises to that which is not “harmful to minors.” The people who live in these states, including the minors the law is allegedly trying to protect, are worse off because of it. So is the rest of the internet.
Yet new legislation is advancing in Kentucky and Nebraska, while the state of Kansas just passed a law which even requires age verification for viewing “acts of homosexuality,” according to a report. Websites can be fined up to $10,000 for each instance a minor accesses their content, and parents are allowed to sue for damages of at least $50,000. This means that the state can “require age verification to access LGBTQ content,” according to attorney Alejandra Caraballo, who said on Threads that “Kansas residents may soon need their state IDs” to access material that simply “depicts LGBTQ people.”

One newspaper opinion piece argues there’s an easier solution: don’t buy your children a smartphone. “Or we could purchase any of the various software packages that block social media and obscene content from their devices. Or we could allow them to use social media, but limit their screen time. Or we could educate them about the issues that social media causes and simply trust them to make good choices. All of these options would have been denied to us if we lived in a state that passed a strict age verification law. Not only do age verification laws reduce parental freedom, but they also create myriad privacy risks. Requiring platforms to collect government IDs and face scans opens the door to potential exploitation by hackers and enemy governments. The very information intended to protect children could end up in the wrong hands, compromising the privacy and security of millions of users…”

Ultimately, age verification laws are a misguided attempt to address the complex issue of underage social media use. Instead of placing undue burdens on users and limiting parental liberty, lawmakers should look for alternative strategies that respect privacy rights while promoting online safety.
This week a trade association for the adult entertainment industry announced plans to petition America’s Supreme Court to intervene.

Source: Slashdot

This is one of the many problems caused by an America that is suddenly so very afraid of sex, death and politics.

Project Ghostbusters: Facebook Accused of Using Your Phone to Wiretap Snapchat, YouTube, Amazon through Onavo VPN

Court filings unsealed last week allege Meta created an internal effort to spy on Snapchat in a secret initiative called “Project Ghostbusters.” Meta did so through Onavo, a Virtual Private Network (VPN) service the company offered between 2016 and 2019 that, ultimately, wasn’t private at all.

“Whenever someone asks a question about Snapchat, the answer is usually that because their traffic is encrypted we have no analytics about them,” said Mark Zuckerberg in an email to three Facebook executives in 2016, unsealed in Meta’s antitrust case on Saturday. “It seems important to figure out a new way to get reliable analytics about them… You should figure out how to do this.”

Thus, Project Ghostbusters was born. It’s Meta’s in-house wiretapping tool to spy on data analytics from Snapchat starting in 2016, later used on YouTube and Amazon. This involved creating “kits” that can be installed on iOS and Android devices, to intercept traffic for certain apps, according to the filings. This was described as a “man-in-the-middle” approach to get data on Facebook’s rivals, but users of Onavo were the “men in the middle.”

[…]

A team of senior executives and roughly 41 lawyers worked on Project Ghostbusters, according to court filings. The group was heavily concerned with whether to continue the program in the face of press scrutiny. Facebook ultimately shut down Onavo in 2019 after Apple booted the VPN from its app store.

The plaintiffs also allege that Facebook violated the United States Wiretap Act, which prohibits the intentional interception of another person’s electronic communications.

[…]

The plaintiffs allege Project Ghostbusters harmed competition in the ad industry, adding weight to their central argument that Meta is a monopoly in social media.

Source: Project Ghostbusters: Facebook Accused of Using Your Phone to Wiretap Snapchat

Who would have thought that a Facebook VPN was worthless? Oh, I have been reporting on this since 2018.

General Motors Quits Sharing Driving Behavior With Data Brokers – Now sells it directly to insurance companies?

General Motors said Friday that it had stopped sharing details about how people drove its cars with two data brokers that created risk profiles for the insurance industry.

The decision followed a New York Times report this month that G.M. had, for years, been sharing data about drivers’ mileage, braking, acceleration and speed with the insurance industry. The drivers were enrolled — some unknowingly, they said — in OnStar Smart Driver, a feature in G.M.’s internet-connected cars that collected data about how the car had been driven and promised feedback and digital badges for good driving.

Some drivers said their insurance rates had increased as a result of the captured data, which G.M. shared with two brokers, LexisNexis Risk Solutions and Verisk. The firms then sold the data to insurance companies.

Since Wednesday, “OnStar Smart Driver customer data is no longer being shared with LexisNexis or Verisk,” a G.M. spokeswoman, Malorie Lucich, said in an emailed statement. “Customer trust is a priority for us, and we are actively evaluating our privacy processes and policies.”

Romeo Chicco, a Florida man whose insurance rates nearly doubled after his Cadillac collected his driving data, filed a complaint seeking class-action status against G.M., OnStar and LexisNexis this month.

An internal document, reviewed by The Times, showed that as of 2022, more than eight million vehicles were included in Smart Driver. An employee familiar with the program said the company’s annual revenue from Smart Driver was in the low millions of dollars.

Source: General Motors Quits Sharing Driving Behavior With Data Brokers – The New York Times

No mention of who it is now selling the data to.

VPN Demand Surges 234.8% After Adult Site Restriction on Texas-Based Users

VPN demand in Texas skyrocketed by 234.8% on March 15, 2024, after state authorities enacted a law requiring adult sites to verify users’ ages before granting them access to the websites’ content.

Texas’ age verification law was passed in June 2023 and was set to take effect in September of the same year. However, a day before its implementation, a US district judge temporarily blocked enforcement after a lawsuit filed by the Free Speech Coalition (FSC) deemed the policy unconstitutional per the First Amendment.

On March 14, 2024, the US Court of Appeals for the 5th Circuit decreed that Texas could proceed with the law’s enactment.

As a sign of protest, Pornhub, the most visited adult site in the US, blocked IP addresses from Texas, making it the eighth state to face such a ban after enacting similar restrictions on adult sites.

[…]

Following the law’s enactment, users in Texas seem to be scrambling for means to access the affected adult sites. vpnMentor’s research team analyzed user demand data and found a 234.8% increase in VPN demand in the state.

The original report includes a graph of VPN demand in Texas from March 1 to March 16.

Past VPN Demand Growth from Adult Site Restrictions

Pornhub has previously blocked IP addresses from Louisiana, Mississippi, Arkansas, Utah, Virginia, North Carolina, and Montana — all of which have enforced age-verification laws that the adult site deemed unjust.

In May 2023, Pornhub’s banning of Utah-based users caused a 967% spike in VPN demand in the state. That same year, the passing of adult-site-related age restriction laws in Louisiana and Mississippi led to a 200% and 72% surge in VPN interest, respectively.
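As a quick sanity check on what these figures mean, here is a small sketch (with a hypothetical baseline of 100 demand units) translating a percentage increase into a multiple of baseline:

```python
def percent_increase(before: float, after: float) -> float:
    """Relative growth from `before` to `after`, expressed as a percentage."""
    return (after - before) / before * 100

# Texas's reported 234.8% surge corresponds to demand of about 3.3x baseline,
# and Utah's 967% spike to nearly 10.7x baseline.
print(round(percent_increase(100, 334.8), 1))   # 234.8
print(round(percent_increase(100, 1067.0), 1))  # 967.0
```

In other words, the Utah ban roughly decupled VPN interest overnight, while the Texas ban more than tripled it.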

Source: VPN Demand Surges Post Adult Site Restriction on Texas-Based Users

Pornhub disables website in Texas after AG sues for not verifying users’ ages

Pornhub has disabled its site in Texas to object to a state law that requires the company to verify the age of users to prevent minors from accessing the site.

Texas residents who visit the site are met with a message from the company criticizing the state’s elected officials for requiring it to verify the age of users.

The company said the newly passed law impinges on “the rights of adults to access protected speech” and fails to pass strict scrutiny by “employing the least effective and yet also most restrictive means of accomplishing Texas’s stated purpose of allegedly protecting minors.”

Pornhub said safety and compliance are “at the forefront” of the company’s mission, but having users provide identification every time they want to access the site is “not an effective solution for protecting users online.” The adult content website argues the restrictions instead will put minors and users’ privacy at risk.

[…]

The announcement from Pornhub follows the news that Texas Attorney General Ken Paxton (R) was suing Aylo, the pornography giant that owns Pornhub, for not following the newly enacted age verification law.

Paxton’s lawsuit seeks to have Aylo pay up to $1.6 million for the period from mid-September of last year to the date the lawsuit was filed, plus an additional $10,000 for each day since filing.

[…]

Paxton released a statement on March 8 calling the ruling an “important victory.” The court held that the age verification requirement does not violate the First Amendment, he said, declaring a win in the fight against Pornhub and other pornography companies.

The state Legislature passed the age verification law last year, requiring companies that distribute sexual material that could be harmful to minors to confirm that users of the platform are older than 18. The law requires users to provide government-issued identification, or public or private data, to verify they are of age to access the site.

 

Source: Pornhub disables website in Texas after AG sues for not verifying users’ ages | The Hill

Age verification is not only easily bypassed, but also extremely sensitive due to the nature of the documents you need to upload to the verification agency. Big centralised databases get hacked all the time, and this one would be a massive target; it would also leave the people in it open to blackmail, as they would be linked to a porn site – which, for some reason, Americans find problematic.