Mass claim by CUIC against virus scanner (but really tracking spyware) Avast

Privacy First has teamed up with the Austrian organisation NOYB (founded by privacy activist Max Schrems) to found the new mass claim organisation CUIC. CUIC stands for Consumers United in Court, also pronounceable as ‘CU in Court’ (see you in court).

[…]

Millions spied on by virus scanner

CUIC today filed suit against software company Avast, which made virus scanners that illegally collected the browsing behaviour of millions of people on their computers, tablets and phones, including in the Netherlands. This data was then resold to other companies through an Avast subsidiary for millions of euros. It included data about users’ health, locations visited, political affiliation, religious beliefs, sexual orientation and economic situation, linked to each specific user through unique user IDs. In a press release today, CUIC president Wilmar Hendriks put it as follows: “People thought they were safe with a virus scanner, but its very creator tracked everything they did on their computers. Avast sold this information to third parties for big money. They even advertised the goldmine of data they had captured. Companies like Avast should not be allowed to get away with this. That is why we are bringing this lawsuit. Those who won’t hear should feel.”

Fines

Back in March 2023, the Czech privacy regulator (UOOU) concluded that Avast had violated the GDPR and fined the company approximately €13.7 million. The US federal consumer authority, the Federal Trade Commission (FTC), also recently ordered Avast to pay USD 16.5 million in compensation to users, to stop selling or making collected data available to third parties, to delete the data it had collected, and to implement a comprehensive privacy programme.

The lawsuit that CUIC filed against Avast today should lead to compensation for users in the Netherlands.

[…]

Source: Mass claim CUIC against virus scanner Avast launched – Privacy First

Amazon fined almost $8M in Poland over dark patterns

Poland’s competition and consumer protection watchdog has fined Amazon’s European subsidiary around $8 million (31.9 million Zlotys) for “dark patterns” that messed with internet shoppers.

The preliminary ruling applies to Amazon EU SARL, which oversees Amazon’s Polish e-commerce site, Amazon.pl, out of Luxembourg. Poland’s Office of Competition and Consumer Protection said the decision, subject to appeal, reflected misleading practices related to product availability, delivery dates, and drop-off time guarantees.

According to the ruling, Amazon’s Polish operation repeatedly canceled customer orders for e-book readers and other gear. The online souk believed it was within its rights to do so because it considers that its sales contract and delivery obligations take effect only after an item has shipped, rather than when the customer places the order.

But these abrupt cancellations left punters who thought they’d successfully paid for stuff and were awaiting delivery disappointed, sparking complaints to the watchdog, which has seemingly upheld the claims.

Not only that, the regulator was unimpressed that the language on Amazon’s website warning this could happen is difficult to read – “it is written in gray font on a white background, at the very bottom of the page.”

[…]

Source: Amazon fined almost $8M in Poland over ‘dark patterns’ • The Register

Age Verification Laws Drag Us Back to the Dark Ages of the Internet

The fundamental flaw with the age verification bills and laws passing rapidly across the country is the delusional, unfounded belief that putting hurdles between people and pornography is going to actually prevent them from viewing porn. What will happen, and is already happening, is that people–including minors–will go to unmoderated, actively harmful alternatives that don’t require handing over a government-issued ID to see people have sex. Meanwhile, performers and companies that are trying to do the right thing will suffer.

[…]

Source: Age Verification Laws Drag Us Back to the Dark Ages of the Internet

The legislators passing these bills are doing so under the guise of protecting children, but what’s actually happening is a widespread rewiring of the scaffolding of the internet. They ignore long-established legal precedent that has said for years that age verification is unconstitutional, eventually and inevitably reducing everything we see online without impossible privacy hurdles and compromises to that which is not “harmful to minors.” The people who live in these states, including the minors the law is allegedly trying to protect, are worse off because of it. So is the rest of the internet.
Yet new legislation is advancing in Kentucky and Nebraska, while the state of Kansas just passed a law which even requires age-verification for viewing “acts of homosexuality,” according to a report: Websites can be fined up to $10,000 for each instance a minor accesses their content, and parents are allowed to sue for damages of at least $50,000. This means that the state can “require age verification to access LGBTQ content,” according to attorney Alejandra Caraballo, who said on Threads that “Kansas residents may soon need their state IDs” to access material that simply “depicts LGBTQ people.”
One newspaper opinion piece argues there’s an easier solution: don’t buy your children a smartphone: Or we could purchase any of the various software packages that block social media and obscene content from their devices. Or we could allow them to use social media, but limit their screen time. Or we could educate them about the issues that social media causes and simply trust them to make good choices. All of these options would have been denied to us if we lived in a state that passed a strict age verification law. Not only do age verification laws reduce parental freedom, but they also create myriad privacy risks. Requiring platforms to collect government IDs and face scans opens the door to potential exploitation by hackers and enemy governments. The very information intended to protect children could end up in the wrong hands, compromising the privacy and security of millions of users…

Ultimately, age verification laws are a misguided attempt to address the complex issue of underage social media use. Instead of placing undue burdens on users and limiting parental liberty, lawmakers should look for alternative strategies that respect privacy rights while promoting online safety.
This week a trade association for the adult entertainment industry announced plans to petition America’s Supreme Court to intervene.

Source: Slashdot

This is one of the many problems caused by an America that is suddenly so very afraid of sex, death and politics.

Project Ghostbusters: Facebook Accused of Using Your Phone to Wiretap Snapchat, YouTube, Amazon through Onavo VPN

Court filings unsealed last week allege Meta created an internal effort to spy on Snapchat in a secret initiative called “Project Ghostbusters.” Meta did so through Onavo, a Virtual Private Network (VPN) service the company offered between 2016 and 2019 that, ultimately, wasn’t private at all.

“Whenever someone asks a question about Snapchat, the answer is usually that because their traffic is encrypted we have no analytics about them,” said Mark Zuckerberg in an email to three Facebook executives in 2016, unsealed in Meta’s antitrust case on Saturday. “It seems important to figure out a new way to get reliable analytics about them… You should figure out how to do this.”

Thus, Project Ghostbusters was born: Meta’s in-house wiretapping effort to capture analytics data from Snapchat starting in 2016, later extended to YouTube and Amazon. It involved creating “kits” that could be installed on iOS and Android devices to intercept traffic for certain apps, according to the filings. This was described as a “man-in-the-middle” approach to get data on Facebook’s rivals, with Onavo’s users unwittingly serving as the men in the middle.
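The filings describe these interception “kits” only at a high level. Purely as a toy illustration of why a traffic-forwarding middlebox is so revealing, here is a minimal TCP relay (hypothetical code, not Meta’s) that forwards bytes untouched yet still logs who talked to whom and how much was exchanged; a VPN client on a phone occupies exactly this position for all of the device’s traffic:

```python
import socket
import threading

def proxy_one_exchange(listener, upstream_addr, log):
    """Relay one request/response between a client and an upstream server.
    The relay never parses or decrypts the bytes, yet it can still record
    who talked to whom, when, and how much was exchanged."""
    client, client_addr = listener.accept()
    with client, socket.create_connection(upstream_addr) as upstream:
        request = client.recv(4096)
        upstream.sendall(request)
        reply = upstream.recv(4096)
        client.sendall(reply)
        log.append({
            "client": client_addr,        # who
            "upstream": upstream_addr,    # talked to whom
            "bytes_up": len(request),     # and how much
            "bytes_down": len(reply),
        })

# Demo: a local echo server stands in for a remote app server.
def echo_once(server):
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(4096))

upstream_sock = socket.socket()
upstream_sock.bind(("127.0.0.1", 0))
upstream_sock.listen(1)
threading.Thread(target=echo_once, args=(upstream_sock,), daemon=True).start()

proxy_sock = socket.socket()
proxy_sock.bind(("127.0.0.1", 0))
proxy_sock.listen(1)
log = []
t = threading.Thread(target=proxy_one_exchange,
                     args=(proxy_sock, upstream_sock.getsockname(), log))
t.start()

with socket.create_connection(proxy_sock.getsockname()) as c:
    c.sendall(b"hello, app server")
    answer = c.recv(4096)
t.join(timeout=5)

print(answer, log[0])
```

Even when the forwarded bytes are encrypted, the relay’s log alone reveals the connection metadata a “man in the middle” can harvest.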

[…]

A team of senior executives and 41 lawyers worked on Project Ghostbusters, according to court filings. The group was heavily concerned with whether to continue the program in the face of press scrutiny. Facebook ultimately shut down Onavo in 2019 after Apple booted the VPN from its App Store.

Prosecutors also allege that Facebook violated the United States Wiretap Act, which prohibits the intentional procurement of another person’s electronic communications.

[…]

Prosecutors allege Project Ghostbusters harmed competition in the ad industry, adding weight to their central argument that Meta is a monopoly in social media.

Source: Project Ghostbusters: Facebook Accused of Using Your Phone to Wiretap Snapchat

Who would have thought that a Facebook VPN was worthless? Oh, I have been reporting on this since 2018.

General Motors Quits Sharing Driving Behavior With Data Brokers – Now sells it directly to insurance companies?

General Motors said Friday that it had stopped sharing details about how people drove its cars with two data brokers that created risk profiles for the insurance industry.

The decision followed a New York Times report this month that G.M. had, for years, been sharing data about drivers’ mileage, braking, acceleration and speed with the insurance industry. The drivers were enrolled — some unknowingly, they said — in OnStar Smart Driver, a feature in G.M.’s internet-connected cars that collected data about how the car had been driven and promised feedback and digital badges for good driving.

Some drivers said their insurance rates had increased as a result of the captured data, which G.M. shared with two brokers, LexisNexis Risk Solutions and Verisk. The firms then sold the data to insurance companies.

Since Wednesday, “OnStar Smart Driver customer data is no longer being shared with LexisNexis or Verisk,” a G.M. spokeswoman, Malorie Lucich, said in an emailed statement. “Customer trust is a priority for us, and we are actively evaluating our privacy processes and policies.”

Romeo Chicco, a Florida man whose insurance rates nearly doubled after his Cadillac collected his driving data, filed a complaint seeking class-action status against G.M., OnStar and LexisNexis this month.

An internal document, reviewed by The Times, showed that as of 2022, more than eight million vehicles were included in Smart Driver. An employee familiar with the program said the company’s annual revenue from Smart Driver was in the low millions of dollars.

Source: General Motors Quits Sharing Driving Behavior With Data Brokers – The New York Times

No mention of who it is now selling the data to.

VPN Demand Surges 234.8% After Adult Site Restriction on Texas-Based Users

VPN demand in Texas skyrocketed by 234.8% on March 15, 2024, after state authorities enacted a law requiring adult sites to verify users’ ages before granting them access to the websites’ content.

Texas’ age verification law was passed in June 2023 and was set to take effect in September of the same year. However, a day before its implementation, a US district judge temporarily blocked enforcement after the Free Speech Coalition (FSC) filed a lawsuit arguing the policy was unconstitutional under the First Amendment.

On March 14, 2024, the US Court of Appeals for the 5th Circuit decreed that Texas could proceed with the law’s enactment.

As a sign of protest, Pornhub, the most visited adult site in the US, blocked IP addresses from Texas, making it the eighth state to suffer such a ban after enacting similar restrictions on adult sites.

[…]

Following the law’s enactment, users in Texas seem to be scrambling for means to access the affected adult sites. vpnMentor’s research team analyzed user demand data and found a 234.8% increase in VPN demand in the state.

The graph below shows the VPN demand in Texas from March 1 to March 16.

Past VPN Demand Surges from Adult Site Restrictions

Pornhub has previously blocked IP addresses from Louisiana, Mississippi, Arkansas, Utah, Virginia, North Carolina, and Montana — all of which have enforced age-verification laws that the adult site deemed unjust.

In May 2023, Pornhub’s banning of Utah-based users caused a 967% spike in VPN demand in the state. That same year, the passing of adult-site-related age restriction laws in Louisiana and Mississippi led to a 200% and 72% surge in VPN interest, respectively.

Source: VPN Demand Surges Post Adult Site Restriction on Texas-Based Users

Pornhub disables website in Texas after AG sues for not verifying users’ ages

Pornhub has disabled its site in Texas to object to a state law that requires the company to verify the age of users to prevent minors from accessing the site.

Texas residents who visit the site are met with a message from the company that criticizes the state’s elected officials who are requiring them to track the age of users.

The company said the newly passed law impinges on “the rights of adults to access protected speech” and fails to pass strict scrutiny by “employing the least effective and yet also most restrictive means of accomplishing Texas’s stated purpose of allegedly protecting minors.”

Pornhub said safety and compliance are “at the forefront” of the company’s mission, but having users provide identification every time they want to access the site is “not an effective solution for protecting users online.” The adult content website argues the restrictions instead will put minors and users’ privacy at risk.

[…]

The announcement from Pornhub follows the news that Texas Attorney General Ken Paxton (R) was suing Aylo, the pornography giant that owns Pornhub, for not following the newly enacted age verification law.

Paxton’s lawsuit seeks to have Aylo pay up to $1,600,000 in fines accrued from mid-September of last year to the date of the filing of the lawsuit, plus an additional $10,000 for each day since filing.

[…]

Paxton released a statement on March 8, calling the ruling an “important victory.” The court ruled that the age verification requirement does not violate the First Amendment, Paxton said, saying he won in the fight against Pornhub and other pornography companies.

The state Legislature passed the age verification law last year, requiring companies that distribute sexual material that could be harmful to minors to confirm that users of the platform are older than 18. The law requires users to provide government-issued identification or public or private data to verify they are of age to access the site.


Source: Pornhub disables website in Texas after AG sues for not verifying users’ ages | The Hill

Age verification is not only easily bypassed, but also extremely sensitive due to the nature of the documents you need to upload to the verification agency. Big centralised databases get hacked all the time and this one would be a massive target, also leaving people in it potentially open to blackmail, as they would be linked to a porn site – which for some reason Americans find problematic.

Microsoft is once again asking Chrome users to try Bing through unblockable pop-ups – wait, you are not using Firefox?

Microsoft has been pushing Bing pop-up ads in Chrome on Windows 10 and 11. Windows Latest and The Verge reported on Friday that the ad encourages Chrome users (in bold lettering) to use Bing instead of Google search. “Chat with GPT-4 for free on Chrome! Get hundreds of daily chat turns with Bing AI”, the ad reads. If you click “Yes,” the pop-up will install the “Bing Search” Chrome extension while making Microsoft’s search engine the default.

If you click “Yes” on the ad to switch to Bing, a Chrome pop-up will appear, asking you to confirm that you want to change the browser’s default search engine. “Did you mean to change your search provider?” the pop-up asks. “The ‘Microsoft Bing Search for Chrome’ extension changed search to use bing.com,” Chrome’s warning states.

Directly beneath that alert, seemingly in anticipation of Chrome’s pop-up, another Windows notification warns, “Wait — don’t change it back! If you do, you’ll turn off Microsoft Bing Search for Chrome and lose access to Bing AI with GPT-4 and DALL-E 3. Select Keep it to stay with Microsoft Bing.”

Essentially, users are caught in a war of pop-ups between one company trying to pressure you into using its AI assistant / search engine and another trying to keep you on its default (which you probably wanted if you installed Chrome in the first place). Big Tech’s battles for AI and search supremacy are turning into obnoxious virtual shouting matches in front of users’ eyeballs as they try to browse the web.

There doesn’t appear to be an easy way to prevent the ad from appearing.

[…]

Source: Microsoft is once again asking Chrome users to try Bing through unblockable pop-ups

And just when you’d thought you’d lived through the first browser wars, there’s this and Apple’s browser tantrums as well! (Apple stamps feet but now to let EU developers distribute apps from the web, Apple reverses hissy fit decision to remove Home Screen web apps in EU, Shameless Insult, Malicious Compliance, Junk Fees, Extortion Regime: Industry Reacts To Apple’s Proposed Changes Over Digital Markets Act, Mozilla says Apple’s new browser rules are ‘as painful as possible’ for Firefox)

Makers of Switch emulator Yuzu crushed quickly by Nintendo

Tropic Haze, developer of the popular Yuzu Nintendo Switch emulator, appears to have agreed to settle Nintendo’s lawsuit against it. Less than a week after Nintendo filed the legal action, accusing the emulator’s creators of “piracy at a colossal scale,” a joint final judgment and permanent injunction filed Tuesday says Tropic Haze has agreed to pay the Mario maker $2.4 million, along with a long list of concessions.

Nintendo’s lawsuit claimed Tropic Haze violated the anti-circumvention and anti-trafficking provisions of the Digital Millennium Copyright Act (DMCA). “Without Yuzu’s decryption of Nintendo’s encryption, unauthorized copies of games could not be played on PCs or Android devices,” the company wrote in its complaint. It described Yuzu as “software primarily designed to circumvent technological measures.”

Yuzu launched in 2018 as free, open-source software for Windows, Linux and Android. It could run countless copyrighted Switch games — including console sellers like The Legend of Zelda: Breath of the Wild and Tears of the Kingdom, Super Mario Odyssey and Super Mario Wonder. Reddit threads comparing Switch emulators praised Yuzu’s performance compared to rivals like Ryujinx. Yuzu introduces various bugs across different titles, but it can typically handle games at higher resolutions than the Switch, often with better frame rates, so long as your hardware is powerful enough.

A screenshot from Yuzu’s website, showing The Legend of Zelda: Breath of the Wild (Tropic Haze / Nintendo)

As part of an Exhibit A attached to the proposed joint settlement, Tropic Haze agreed to a series of concessions. In addition to paying Nintendo $2.4 million, it must permanently refrain from “engaging in activities related to offering, marketing, distributing, or trafficking in Yuzu emulator or any similar software that circumvents Nintendo’s technical protection measures.”

Tropic Haze must also delete all circumvention devices, tools and Nintendo cryptographic keys used in the emulator and turn over all circumvention devices and modified Nintendo hardware. It even has to surrender the emulator’s web domain (including any variants or successors) to Nintendo. (The website is still live now, perhaps waiting for the judgment’s final a-okay.) Not abiding by the settlement’s agreements could land Tropic Haze in contempt of court, including punitive, coercive and monetary actions.

Although piracy is the top motive for many emulator users, the software can double as crucial tools for video game preservation — making rapid legal surrenders like Tropic Haze’s potentially problematic. Without emulators, Nintendo and other copyright holders could make games obsolete for future generations as older hardware eventually becomes more difficult to find.

Nintendo’s legal team is, of course, no stranger to aggressively enforcing copyrighted material. In recent years, the company went after Switch piracy websites, sued ROM-sharing website RomUniverse for $2 million and helped send hacker Gary Bowser to prison. Although it was Valve’s doing, Nintendo’s reputation indirectly got the Dolphin Wii and GameCube emulator blocked from Steam. It’s safe to say the Mario maker doesn’t share preservationists’ views on the crucial historical role emulators can play.

Despite the settlement, it appears unlikely the open-source Yuzu will disappear entirely. The emulator is still available on GitHub, where its entire codebase can be found.

Source: Makers of Switch emulator Yuzu quickly settle with Nintendo for $2.4 million

Yup, the big money of Nintendo is excellent at destroying the little guys. A really good reason to hate lawyers, but also to see how broken the law is. For more, see: Nintendo files lawsuit against creators of Yuzu emulator

Automakers Are Sharing Consumers’ Driving Behavior With Insurance Companies


Kenn Dahl says he has always been a careful driver. The owner of a software company near Seattle, he drives a leased Chevrolet Bolt. He’s never been responsible for an accident. So Mr. Dahl, 65, was surprised in 2022 when the cost of his car insurance jumped by 21 percent. Quotes from other insurance companies were also high. One insurance agent told him his LexisNexis report was a factor.

LexisNexis is a New York-based global data broker with a “Risk Solutions” division that caters to the auto insurance industry and has traditionally kept tabs on car accidents and tickets. Upon Mr. Dahl’s request, LexisNexis sent him a 258-page “consumer disclosure report,” which it must provide under the Fair Credit Reporting Act. What it contained stunned him: more than 130 pages detailing each time he or his wife had driven the Bolt over the previous six months. It included the dates of 640 trips, their start and end times, the distance driven and an accounting of any speeding, hard braking or sharp accelerations. The only thing it didn’t have was where they had driven the car. On a Thursday morning in June, for example, the car had been driven 7.33 miles in 18 minutes; there had been two rapid accelerations and two incidents of hard braking.

According to the report, the trip details had been provided by General Motors — the manufacturer of the Chevy Bolt. LexisNexis analyzed that driving data to create a risk score “for insurers to use as one factor of many to create more personalized insurance coverage,” according to a LexisNexis spokesman, Dean Carney. Eight insurance companies had requested information about Mr. Dahl from LexisNexis over the previous month. “It felt like a betrayal,” Mr. Dahl said. “They’re taking information that I didn’t realize was going to be shared and screwing with our insurance.”

In recent years, insurance companies have offered incentives to people who install dongles in their cars or download smartphone apps that monitor their driving, including how much they drive, how fast they take corners, how hard they hit the brakes and whether they speed. But “drivers are historically reluctant to participate in these programs,” as Ford Motor put it in a patent application (PDF) that describes what is happening instead: car companies are collecting information directly from internet-connected vehicles for use by the insurance industry.

Sometimes this is happening with a driver’s awareness and consent. Car companies have established relationships with insurance companies, so that if drivers want to sign up for what’s called usage-based insurance — where rates are set based on monitoring of their driving habits — it’s easy to collect that data wirelessly from their cars.

But in other instances, something much sneakier has happened. Modern cars are internet-enabled, allowing access to services like navigation, roadside assistance and car apps that drivers can connect to their vehicles to locate them or unlock them remotely. In recent years, automakers, including G.M., Honda, Kia and Hyundai, have started offering optional features in their connected-car apps that rate people’s driving. Some drivers may not realize that, if they turn on these features, the car companies then give information about how they drive to data brokers like LexisNexis.

Automakers and data brokers that have partnered to collect detailed driving data from millions of Americans say they have drivers’ permission to do so. But the existence of these partnerships is nearly invisible to drivers, whose consent is obtained in fine print and murky privacy policies that few read. Especially troubling is that some drivers with vehicles made by G.M. say they were tracked even when they did not turn on the feature — called OnStar Smart Driver — and that their insurance rates went up as a result.
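How exactly LexisNexis turns trip events into a risk score is proprietary, but the mechanics are easy to picture. Here is a purely hypothetical sketch (made-up weights and event categories, not the real model) that rolls per-trip event counts into an events-per-100-miles score, using the June trip from Mr. Dahl’s report as input:

```python
from dataclasses import dataclass

@dataclass
class Trip:
    miles: float
    hard_brakes: int
    rapid_accels: int
    speeding_events: int

def risk_score(trips: list) -> float:
    """Hypothetical telematics score: weighted events per 100 miles.
    Higher = riskier. Weights are invented for illustration."""
    miles = sum(t.miles for t in trips) or 1.0
    events = sum(2.0 * t.hard_brakes + 1.5 * t.rapid_accels
                 + 3.0 * t.speeding_events for t in trips)
    return round(100.0 * events / miles, 1)

# The June trip from the article: 7.33 miles, two rapid accelerations,
# two hard braking incidents.
print(risk_score([Trip(7.33, hard_brakes=2, rapid_accels=2, speeding_events=0)]))
```

Fed a few hundred such trips, a number like this can move a premium without the driver ever seeing the formula.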

European Commission broke data protection law with Microsoft Office 365 – duh

The European Commission has been reprimanded for infringing data protection regulations when using Microsoft 365.

The rebuke came from the European Data Protection Supervisor (EDPS) and is the culmination of an investigation that kicked off in May 2021, following the Schrems II judgement.

According to the EDPS, the EC infringed several data protection regulations, including rules around transferring personal data outside the EU / European Economic Area (EEA).

According to the organization, “In particular, the Commission has failed to provide appropriate safeguards to ensure that personal data transferred outside the EU/EEA are afforded an essentially equivalent level of protection as guaranteed in the EU/EEA.

“Furthermore, in its contract with Microsoft, the Commission did not sufficiently specify what types of personal data are to be collected and for which explicit and specified purposes when using Microsoft 365.”

While the concerns are more about EU institutions and transparency, they should also serve as notice to any company doing business in the EU / EEA to take a very close look at how it has configured Microsoft 365 regarding the EU Data Protection Regulations.

[…]

Source: European Commission broke data protection law with Microsoft • The Register

Who knew? An American company running an American cloud product on American servers, and the EU was putting its data on it. Who would have thought that data might end up in America?!

Black Trump Supporters Are Being AI-Generated – Trumpistas fall for them though

Donald Trump supporters are creating and sharing AI-generated images of the former president with Black voters. The photos appear to be an attempt to inflate Trump’s popularity with the Black community, which may be irreparably harmed by his ties to white supremacist groups, but the photos are nothing but fakes.

In the leadup to the 2024 Presidential Election, several of these AI-generated dupes of Black Trump supporters have popped up on social media. One image is a holiday photo depicting Trump embracing several Black people. However, it’s an AI dupe created by The Mark Kaye Show, a conservative talk show, and distributed to Kaye’s more than one million Facebook followers. The post from November, first reported by the BBC, was not labeled as being AI-generated in any way.

“I never thought I would read the words ‘BLM Leader endorses Donald Trump,’ but then again, Christmas is the time for miracles,” said Kaye in a Facebook post.

The image is obviously an AI fake. Trump’s hands look deformed, and the person on the far left is missing a ring finger.

[…]

Source: Black Trump Supporters Are Being AI-Generated

You don’t own what you bought: Roku Issues a Mandatory Terms of Service Update That You Must Agree To or You Can’t Use Your Roku

roku remote with dollar bills around it

Over the last 48 hours, Roku has slowly been rolling out a mandatory update to its terms of service. The new terms change the dispute resolution provisions, but it is not clear exactly why. When the new terms and conditions message shows up on a Roku player or TV, your only option is to accept them or turn off your Roku and stop using it.

[…]

Roku does offer a way to opt out of the new arbitration rules if you write the company a letter at an address listed in the terms of service. You do need to hurry, though, as you only get 30 days to send the opt-out letter, and it is unclear whether that window starts when you buy your Roku or when you agree to the new terms.

Customers are understandably confused by these new terms of service, which have appeared in recent days, raising questions about why now, and why such aggressive messaging forces you to manually accept them or stop using your device.

[…]

Source: Roku Issues a Mandatory Terms of Service Update That You Must Agree To or You Can’t Use Your Roku | Cord Cutters News

Biden executive order aims to stop a few countries from buying Americans’ personal data – a watered down EU GDPR

[…]

President Joe Biden will issue an executive order that aims to limit the mass-sale of Americans’ personal data to “countries of concern,” including Russia and China. The order specifically targets the bulk sale of geolocation, genomic, financial, biometric, health and other personally identifying information.

During a briefing with reporters, a senior administration official said that the sale of such data to these countries poses a national security risk. “Our current policies and laws leave open access to vast amounts of American sensitive personal data,” the official said. “Buying data through data brokers is currently legal in the United States, and that reflects a gap in our national security toolkit that we are working to fill with this program.”

Researchers and privacy advocates have long warned about the national security risks posed by the largely unregulated multibillion-dollar data broker industry. Last fall, researchers at Duke University reported that they were able to easily buy troves of personal and health data about US military personnel while posing as foreign agents.

Biden’s executive order attempts to address such scenarios. It bars data brokers and other companies from selling large troves of Americans’ personal information to countries or entities in Russia, China, Iran, North Korea, Cuba and Venezuela either directly or indirectly.

[…]

As the White House points out, there are currently few regulations for the multibillion-dollar data broker industry. The order will do nothing to slow the bulk sale of Americans’ data to countries or companies not deemed to be a security risk. “President Biden continues to urge Congress to do its part and pass comprehensive bipartisan privacy legislation, especially to protect the safety of our children,” a White House statement says.

Source: Biden executive order aims to stop Russia and China from buying Americans’ personal data

Too little, not enough, way way way too late.

Investigators seek push notification metadata in 130 cases – this is scarier than you think

More than 130 petitions seeking access to push notification metadata have been filed in US courts, according to a Washington Post investigation – a finding that underscores the lack of privacy protection available to users of mobile devices.

The poor state of mobile device privacy has provided US state and federal investigators with valuable information in criminal investigations involving suspected terrorism, child sexual abuse, drugs, and fraud – even when suspects have tried to hide their communications using encrypted messaging.

But it also means that prosecutors in states that outlaw abortion could demand such information to geolocate women at reproductive healthcare facilities. Foreign governments may also demand push notification metadata from Apple, Google, third-party push services, or app developers for their own criminal investigations or political persecutions. Concern has already surfaced that they may have done so for several years.

In December 2023, US senator Ron Wyden (D-OR) sent a letter to the Justice Department about a tip received by his office in 2022 indicating that foreign government agencies were demanding smartphone push notification records from Google and Apple.

[…]

Apple and Google operate push notification services that relay communication from third-party servers to specific applications on iOS and Android phones. App developers can encrypt these messages when they’re stored (in transit they’re protected by TLS) but the associated metadata – the app receiving the notification, the time stamp, and network details – is not encrypted.
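To make concrete what "unencrypted metadata" means in practice, here is a purely illustrative sketch of what a push relay in the style of APNs or FCM can see on each notification. The field names and values are made up for illustration, not a real API; the point is that the routing metadata stays readable to the relay even when the app encrypts the body.

```python
from base64 import b64encode

# The app encrypts the message body before handing it to the relay,
# so the payload is opaque to the relay operator.
payload = b64encode(b"<app-encrypted message body>").decode()

notification = {
    "device_token": "a1b2c3d4",         # identifies the target device
    "bundle_id": "com.example.chat",    # which app receives the push
    "timestamp": "2024-02-28T12:00:00Z",
    "source_ip": "203.0.113.7",         # network details
    "payload": payload,                 # unreadable to the relay
}

# Everything except the payload is visible to the relay operator --
# and therefore obtainable with a court order or subpoena.
visible_to_relay = {k: v for k, v in notification.items() if k != "payload"}
print(sorted(visible_to_relay))
# → ['bundle_id', 'device_token', 'source_ip', 'timestamp']
```

Encrypting the payload protects *what* was said, but the who/which-app/when/from-where envelope is exactly the metadata investigators are requesting.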

[…]

push notification metadata is extremely valuable to marketing organizations, to app distributors like Apple and Google, and also to government organizations and law enforcement agencies.

“In 2022, one of the largest push notification companies in the world, Pushwoosh, was found to secretly be a Russian company that deceived both the CDC and US Army into installing their technology into specific government apps,” said Edwards.

“These types of scandals are the tip of the iceberg for how push notifications can be abused, and why countless serious organizations focus on them as a source of intelligence,” he explained.

“If you sign up for push notifications, and travel around to unique locations, as the messages hit your device, specific details about your device, IP address, and location are shared with app stores like Apple and Google,” Edwards added. “And the push notification companies who support these services typically have additional details about users, including email addresses and user IDs.”

Edwards continued that other identifiers may further deprive people of privacy, noting that advertising identifiers can be connected to push notification identifiers. He pointed to Pushwoosh as an example of a firm that built its push notification ID using the iOS advertising ID.

“The simplest way to think about push notifications,” he said, is “they are just like little pre-scheduled messages from marketing vendors, sent via mobile apps. The data that is required to ‘turn on any push notification service’ is quite invasive and can unexpectedly reveal/track your location/store your movement with a third-party marketing company or one of the app stores, which is merely a court order or subpoena away from potentially exposing those personal details.”

Source: Investigators seek push notification metadata in 130 cases • The Register

Also see: Governments, Apple, Google spying on users through push notifications – they all go through Apple and Google servers (unencrypted?)!

Apple reverses hissy fit decision to remove Home Screen web apps in EU

baby throwing a tantrum

Apple has reversed its decision to limit the functionality of Home Screen web apps in Europe following an outcry from the developer community and the prospect of further investigation.

“We have received requests to continue to offer support for Home Screen web apps in iOS, therefore we will continue to offer the existing Home Screen web apps capability in the EU,” the iPhone giant said in an update to its developer documentation on Friday.

“This support means Home Screen web apps continue to be built directly on WebKit and its security architecture, and align with the security and privacy model for native apps on iOS.”

Apple said Home Screen web app support would return with the general availability of iOS 17.4, presently in beta testing and due in the next few days.

[…]

In January, Apple said it would make several changes to its iOS operating system to comply with the law. These include allowing third-party app stores, making its NFC hardware accessible to third-party developers for contactless payment applications, and supporting third-party browser engines as alternatives to Safari’s WebKit.

Last month, with the second beta release of iOS 17.4, it became clear Apple would impose a cost for its concessions. The iCloud goliath said, “to comply with the DMA’s requirements, we had to remove the Home Screen web apps feature in the EU.”

Essentially, Apple has to support third-party browser engines in the EU, the biz didn’t want PWAs to use those non-WebKit engines, and so it chose to just banish the web apps from its Home Screen. Now it’s changed its mind and allowed the apps to stay albeit using WebKit.

For those not in the know: The Home Screen web apps feature refers to one of the capabilities afforded to Progressive Web Apps that makes them perform and appear more like native iOS apps. It allows web apps or websites to be opened from an iOS device and take over the whole screen, just like a native app, instead of loading within a browser window.

[…]

Apple’s demotion of Home Screen web apps broke settings integration, browser storage, push notifications, icon badging, share-to-PWA, app shortcuts, and device APIs.

“Cupertino’s attempt to scuttle PWAs under cover of chaos is exactly what it appears to be: a shocking attempt to keep the web from ever emerging as a true threat to the App Store and blame regulators for Apple’s own malicious choices,”

[…]

In response to Apple’s about-face, OWA credited both vocal protests from developers and the reported decision by regulators to open an investigation into Apple’s abandonment of Home Screen web app support.

[…]

“This simply returns us back to the status quo prior to Apple’s plan to sabotage web apps for the EU,” the group said. “Apple’s over-a-decade suppression of the web in favor of the App Store continues worldwide, and their attempt to destroy web apps in the EU is just their latest attempt.

“If there is to be any silver lining, it is that this has thoroughly exposed Apple’s genuine fear of a secure, open and interoperable alternative to their proprietary App Store that they can not control or tax.”

[…]

Source: Apple reverses decision to remove Home Screen web apps in EU • The Register

Apple has thrown a real tantrum about being forced to comply with the DMA and, while hammering its hands and feet and rolling on the floor like a toddler who can’t get their way, has broken a lot of stuff. Turns out they are now kind of fixing some of it.

See also: Shameless Insult, Malicious Compliance, Junk Fees, Extortion Regime: Industry Reacts To Apple’s Proposed Changes Over Digital Markets Act

HDMI Forum blocks AMD from open sourcing its HDMI 2.1 drivers

stop using hdmi

As spotted by Linux benchmarking outfit Phoronix, AMD is having problems releasing certain versions of open-source drivers it’s developed for its GPUs – because, according to the Ryzen processor designer, the HDMI Forum won’t allow the code to be released as open source. Specifically, we’re talking about AMD’s FOSS drivers for HDMI 2.1 here.

For some years, AMD GPU customers running Linux have faced difficulties getting high-definition, high-refresh-rate displays connected over HDMI 2.1 to work correctly.

[…]

The issue isn’t missing drivers: AMD has already developed them under its GPU Open initiative. As AMD developer Alex Deucher put it in two different comments on the Freedesktop.org forum:

HDMI 2.1 is not available on Linux due to the HDMI Forum.

The HDMI Forum does not currently allow an open source HDMI 2.1 implementation.

The High-Definition Multimedia Interface is not just a type of port into which to plug your monitor. It’s a whole complex specification, of which version 2.1, the latest, was published in 2017.

[…]

HDMI cables are complicated things, including copyright-enforcing measures called High-bandwidth Digital Content Protection (HDCP) – although some of those were cracked way back in 2010. As we reported when it came out, you needed new cables to get the best out of HDMI 2.1. Since then, that edition was supplemented by version 2.1b in August 2023 – so now, you may need even newer ones.

This is partly because display technology is constantly improving. 4K displays are old tech: We described compatibility issues a decade ago, and covered 4K gaming the following year.

Such high-quality video brings two consequences. On the one hand, the bandwidth the cables are expected to carry has increased substantially. On the other, some forms of copying or duplication involving a reduction in image quality – say, halving the vertical and horizontal resolution – might still result in a perfectly watchable copy.
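A back-of-envelope calculation shows why the bandwidth jump matters. The sketch below ignores blanking intervals and link-encoding overhead (real link-rate requirements are somewhat higher than these raw figures), but it illustrates why a 4K 120 Hz 10-bit signal outgrows HDMI 2.0's 18 Gbit/s and needs HDMI 2.1's 48 Gbit/s:

```python
# Raw (uncompressed) video bandwidth, ignoring blanking and encoding overhead.
def raw_video_gbps(width, height, fps, bits_per_pixel):
    return width * height * fps * bits_per_pixel / 1e9

# 4K @ 120 Hz with 10 bits per colour channel (30 bits per pixel)
uhd_120 = raw_video_gbps(3840, 2160, 120, 30)
print(f"4K @ 120 Hz, 10-bit: {uhd_120:.1f} Gbit/s raw")   # → 29.9 Gbit/s

# HDMI 2.0 tops out at 18 Gbit/s; HDMI 2.1 raises that to 48 Gbit/s.
print("Fits HDMI 2.0 (18 Gbit/s)?", uhd_120 <= 18)        # → False
print("Fits HDMI 2.1 (48 Gbit/s)?", uhd_120 <= 48)        # → True
```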

[…]

As we have noted before, we prefer DisplayPort to HDMI, and one reason is that you can happily drive an HDMI monitor from a DisplayPort output using a cheap cable, or if you have an HDMI cable to hand, an inexpensive adapter. We picked a random example which is a bargain at under $5.

But the converse does not hold. You can’t drive a DisplayPort screen from an HDMI port. That needs an intelligent adapter which can resample the image and regenerate a display. That said, they are getting cheaper, and for lower-quality video such as old VGA or SCART outputs, these days a circa-$5 microcontroller board such as a Raspberry Pi Pico can do the job, and you can build your own.

Source: HDMI Forum ‘blocks AMD open sourcing its 2.1 drivers’ • The Register

Scammers Are Now Scanning Faces To Defeat Age Verification Biometric Security Measures

For quite some time now we’ve been pointing out the many harms of age verification technologies, and how they’re a disaster for privacy. In particular, we’ve noted that if you have someone collecting biometric information on people, that data itself becomes a massive risk since it will be targeted.

And, remember, a year and a half ago, the Age Verification Providers Association posted a comment right here on Techdirt saying not to worry about the privacy risks, as all they wanted to do was scan everyone’s face to visit a website (perhaps making you turn to the left or right to prove “liveness”).

Anyway, now a report has come out that some Chinese hackers have been tricking people into having their faces scanned, so that the hackers can then use the resulting scan to access accounts.

Attesting to this, cybersecurity company Group-IB has discovered the first banking trojan that steals people’s faces. Unsuspecting users are tricked into giving up personal IDs and phone numbers and are prompted to perform face scans. These images are then swapped out with AI-generated deepfakes that can easily bypass security checkpoints.

The method — developed by a Chinese-based hacking family — is believed to have been used in Vietnam earlier this month, when attackers lured a victim into a malicious app, tricked them into face scanning, then withdrew the equivalent of $40,000 from their bank account. 

Cool cool, nothing could possibly go wrong in now requiring more and more people to normalize the idea of scanning your face to access a website. Nothing at all.

And no, this isn’t about age verification, but still, the normalization of facial scanning is a problem, as it’s such an obvious target for scammers and hackers.

Source: As Predicted: Scammers Are Now Scanning Faces To Defeat Biometric Security Measures | Techdirt

Meta will start collecting much more “anonymized” data about Quest headset usage

Meta will soon begin “collecting anonymized data” from users of its Quest headsets, a move that could see the company aggregating information about hand, body, and eye tracking; camera information; “information about your physical environment”; and information about “the virtual reality events you attend.”

In an email sent to Quest users Monday, Meta notes that it currently collects “the data required for your Meta Quest to work properly.” Starting with the next software update, though, the company will begin collecting and aggregating “anonymized data about… device usage” from Quest users. That anonymized data will be used “for things like building better experiences and improving Meta Quest products for everyone,” the company writes.

A linked help page on data sharing clarifies that Meta can collect anonymized versions of any of the usage data included in the “Supplemental Meta Platforms Technologies Privacy Policy,” which was last updated in October. That document lists a host of personal information that Meta can collect from your headset, including:

  • “Your audio data, when your microphone preferences are enabled, to animate your avatar’s lip and face movement”
  • “Certain data” about hand, body, and eye tracking, “such as tracking quality and the amount of time it takes to detect your hands and body”
  • Fitness-related information such as the “number of calories you burned, how long you’ve been physically active, [and] your fitness goals and achievements”
  • “Information about your physical environment and its dimensions” such as “the size of walls, surfaces, and objects in your room and the distances between them and your headset”
  • “Voice interactions” used when making audio commands or dictations, including audio recordings and transcripts that might include “any background sound that happens when you use those services” (these recordings and transcriptions are deleted “immediately” in most cases, Meta writes)
  • Information about “your activity in virtual reality,” including “the virtual reality events you attend”

The anonymized collected data is used in part to “analyz[e] device performance and reliability” to “improve the hardware and software that powers your experiences with Meta VR Products.”

What does Meta know about what you’re doing in VR? (Image: Meta)

Meta’s help page also lists a small subset of “additional data” that headset users can opt out of sharing with Meta. But there’s no indication that Quest users can opt out of the new anonymized data collection policies entirely.

These policies only seem to apply to users who make use of a Meta account to access their Quest headsets, and those users are also subject to Meta’s wider data-collection policies. Those who use a legacy Oculus account are subject to a separate privacy policy that describes a similar but more limited set of data-collection practices.

Not a new concern

Meta is clear that the data it collects “is anonymized so it does not identify you.” But here at Ars, we’ve long covered situations where data that was supposed to be “anonymous” was linked back to personally identifiable information about the people who generated it. The FTC is currently pursuing a case against Kochava, a data broker that links de-anonymized geolocation data to a “staggering amount of sensitive and identifying information,” according to the regulator.
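The way "anonymous" data gets linked back is usually a simple join on quasi-identifiers shared with some other dataset. The toy sketch below uses entirely made-up records (names, ZIP codes, datasets are all hypothetical) to show the mechanic:

```python
# Made-up "anonymized" usage records: no names, but quasi-identifiers remain.
anonymized_records = [
    {"user_id": "anon-1", "zip": "90210", "birth_year": 1987, "place": "clinic"},
    {"user_id": "anon-2", "zip": "10001", "birth_year": 1990, "place": "office"},
]

# A second, ostensibly unrelated dataset (think voter roll or data-broker file).
public_records = [
    {"name": "Jane Doe", "zip": "90210", "birth_year": 1987},
    {"name": "John Roe", "zip": "10001", "birth_year": 1990},
]

# Joining on the shared quasi-identifiers re-identifies the "anonymous" IDs.
reidentified = {}
for anon in anonymized_records:
    for person in public_records:
        if (anon["zip"], anon["birth_year"]) == (person["zip"], person["birth_year"]):
            reidentified[anon["user_id"]] = person["name"]

print(reidentified)  # → {'anon-1': 'Jane Doe', 'anon-2': 'John Roe'}
```

Real-world re-identification works the same way, just with richer quasi-identifiers (location traces, device details, timestamps) and bigger auxiliary datasets.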

Concerns about VR headset data collection date back to when Meta’s virtual reality division was still named Oculus. Shortly after the launch of the Oculus Rift in 2016, Senator Al Franken (D-Minn.) sent an open letter to the company seeking information on “the extent to which Oculus may be collecting Americans’ personal information, including sensitive location data, and sharing that information with third parties.”

In 2020, the company then called Facebook faced controversy for requiring Oculus users to migrate to a Facebook account to continue using their headsets. That led to a temporary pause of Oculus headset sales in Germany before Meta finally offered the option to decouple its VR accounts from its social media accounts in 2022.

Source: Meta will start collecting “anonymized” data about Quest headset usage | Ars Technica

Canadian college M&M vending machines secretly scanning faces – revealed by error message

[…]

The scandal started when a student using the alias SquidKid47 posted an image on Reddit showing a campus vending machine error message, “Invenda.Vending.FacialRecognitionApp.exe,” displayed after the machine failed to launch a facial recognition application that nobody expected to be part of the process of using a vending machine.

Reddit post shows error message displayed on a University of Waterloo vending machine (cropped and lightly edited for clarity).

“Hey, so why do the stupid M&M machines have facial recognition?” SquidKid47 pondered.

The Reddit post sparked an investigation from a fourth-year student named River Stanley, who was writing for a university publication called MathNEWS.

Stanley sounded the alarm after consulting Invenda sales brochures that promised “the machines are capable of sending estimated ages and genders” of every person who used the machines, without ever requesting consent.

This frustrated Stanley, who discovered that Canada’s privacy commissioner had years ago investigated a shopping mall operator called Cadillac Fairview after discovering some of the malls’ informational kiosks were secretly “using facial recognition software on unsuspecting patrons.”

Only because of that official investigation did Canadians learn that “over 5 million nonconsenting Canadians” were scanned into Cadillac Fairview’s database, Stanley reported. Where Cadillac Fairview was ultimately forced to delete the entire database, Stanley wrote that consequences for collecting similarly sensitive facial recognition data without consent for Invenda clients like Mars remain unclear.

Stanley’s report ended with a call for students to demand that the university “bar facial recognition vending machines from campus.”

A University of Waterloo spokesperson, Rebecca Elming, eventually responded, confirming to CTV News that the school had asked for the vending machine software to be disabled until the machines could be removed.

[…]

Source: Vending machine error reveals secret face image database of college students | Ars Technica

European human rights court says backdooring encrypted comms is against human rights

a picture of an eye staring at you from your mobile phone

The European Court of Human Rights (ECHR) has ruled that laws requiring crippled encryption and extensive data retention violate the European Convention on Human Rights – a decision that may derail European data surveillance legislation known as Chat Control.

The Court issued a decision on Tuesday stating that “the contested legislation providing for the retention of all internet communications of all users, the security services’ direct access to the data stored without adequate safeguards against abuse and the requirement to decrypt encrypted communications, as applied to end-to-end encrypted communications, cannot be regarded as necessary in a democratic society.”

The “contested legislation” mentioned above refers to a legal challenge that started in 2017 after a demand from Russia’s Federal Security Service (FSB) that messaging service Telegram provide technical information to assist the decryption of a user’s communication. The plaintiff, Anton Valeryevich Podchasov, challenged the order in Russia but his claim was dismissed.

In 2019, Podchasov brought the matter to the ECHR. Russia joined the Council of Europe – an international human rights organization – in 1996 and was a member until it withdrew in March 2022 following its illegal invasion of Ukraine. Because the 2019 case predates Russia’s withdrawal, the ECHR continued to consider the matter.

The Court concluded that the Russian law requiring Telegram “to decrypt end-to-end encrypted communications risks amounting to a requirement that providers of such services weaken the encryption mechanism for all users.” As such, the Court considers that requirement disproportionate to legitimate law enforcement goals.

While the ECHR decision is unlikely to have any effect within Russia, it matters to countries in Europe that are contemplating similar decryption laws – such as Chat Control and the UK government’s Online Safety Act.

Chat Control is shorthand for European data surveillance legislation that would require internet service providers to scan digital communications for illegal content – specifically child sexual abuse material and potentially terrorism-related information. Doing so would necessarily entail weakening the encryption that keeps communication private.

Efforts to develop workable rules have been underway for several years and continue to this day, despite widespread condemnation from academics, privacy-oriented orgs, and civil society groups.

Patrick Breyer, a member of the European parliament for the Pirate Party, hailed the ruling for demonstrating that Chat Control is incompatible with EU law.

“With this outstanding landmark judgment, the ‘client-side scanning’ surveillance on all smartphones proposed by the EU Commission in its chat control bill is clearly illegal,” said Breyer.

“It would destroy the protection of everyone instead of investigating suspects. EU governments will now have no choice but to remove the destruction of secure encryption from their position on this proposal – as well as the indiscriminate surveillance of private communications of the entire population!” ®

Source: European human rights court says no to weakened encryption • The Register

23andMe Thinks ‘Mining’ Your DNA Data Is Its Last Hope

23andMe is in a death spiral. Almost everyone who wants a DNA test already bought one, a nightmare data breach ruined the company’s reputation, and 23andMe’s stock is so close to worthless it might get kicked off the Nasdaq. CEO Anne Wojcicki is on a crisis tour, promising investors the company isn’t going out of business because she has a new plan: 23andMe is going to double down on mining your DNA data and selling it to pharmaceutical companies.

“We now have the ability to mine the dataset for ourselves, as well as to partner with other groups,” Wojcicki said in an interview with Wired. “It’s a real resource that we could apply to a number of different organizations for their own drug discovery.”

That’s been part of the plan since day one, but now it looks like it’s going to happen on a much larger scale. 23andMe has always coerced its customers into giving the company consent to share their DNA for “research,” a friendlier way of saying “giving it to pharmaceutical companies.” The company enjoyed an exclusive partnership with pharmaceutical giant GlaxoSmithKline, but apparently the drug maker already sucked the value out of your DNA, and that deal is running out. Now, 23andMe is looking for new companies who want to take a look at your genes.

[…]

the most exciting opportunity for “improvements” is that 23andMe and the pharmaceutical industry get to develop new drugs. There’s a tinge of irony here. Any discoveries that 23andMe makes come from studying DNA samples that you paid the company to collect.

[…]

The problem with 23andMe’s consumer-facing business is the company sells a product you only need once in a lifetime. Worse, the appeal of a DNA test for most people is the novelty of ancestry results, but if your brother already paid for a test, you already know the answers.

[…]

it’s spent years trying to brand itself as a healthcare service, and not just a $79 permission slip to tell people you’re Irish. In fact, the company thinks you should buy yourself a recurring annual subscription to something called 23andMe+ Total Health. It only costs $1,188 a year.

[…]

The secret is you just can’t learn a ton about your health from genetic screenings, aside from tests for specific diseases that doctors rarely order unless you have a family history.

[…]

What do you get with these subscriptions? It’s kind of vague. Depending on the package, they include a service that “helps you understand how genetics and lifestyle can impact your likelihood of developing certain conditions,” testing for rare genetic conditions, enhanced ancestry features, and more. Essentially, they’ll run genetic tests that you may not need. Then, they may or may not recommend that you talk to a doctor, because they can’t offer you actual medical care.

You could also skip the middleman and start with a normal conversation with your doctor, who will order genetic tests if you need them and bill your insurance company.

[…]

If 23andMe survives, the first step is going to be deals that give more companies access to look at your genetics than ever before. But if 23andMe goes out of business, it’ll get purchased or sold off for parts, which means other companies will get a look at your data anyway.

Source: 23andMe Admits ‘Mining’ Your DNA Data Is Its Last Hope

What this piece misses is the danger of whom the data is sold to – or if it is leaked (which it was). Insurance companies may refuse to insure you. Your DNA may be faked. Your unique and unchangeable identity – and those of your family – has been stolen.

US judge dismisses authors’ ridiculous copyright claim against OpenAI

A US judge has dismissed some of the claims made by writers in a copyright infringement lawsuit against OpenAI, though gave the wordsmiths another chance to amend their complaint.

The case – Paul Tremblay et al vs OpenAI – kicked off in 2023 when novelists Paul Tremblay, Christopher Golden, and Richard Kadrey, and writer-comedian-actress Sarah Silverman accused OpenAI of illegally scraping their work without consent to train the AI champion’s large language models.

The creators claimed that ChatGPT produced accurate summaries of their books and offered that as evidence that their writing had been ripped off. Since OpenAI’s neural networks learn to generate text from its training data, the group argued that its output should be considered a “derivative work” of their IP.

The plaintiffs also alleged that OpenAI’s model deliberately omitted so-called copyright management information, or CMI – think books’ ISBN numbers and authors’ names – when it produced output based on their works. They also accused the startup of unfair competition, negligence, and unjust enrichment.

All in all, the writers are upset that, as alleged, OpenAI not only used copyrighted work without permission or recompense to train its models, but also that those models generate prose that closely apes their own – which, one might say, would hinder their ability to profit from that work.

Federal district Judge Araceli Martínez-Olguín, sitting in northern California, was asked by OpenAI to dismiss the authors’ claims in August.

In a fresh order [PDF] released on Monday, Martínez-Olguín delivered the bad news for the scribes.

“Plaintiffs fail to explain what the outputs entail or allege that any particular output is substantially similar – or similar at all – to their books. Accordingly, the court dismisses the vicarious copyright infringement claim,” she wrote. She also opined that the authors couldn’t prove that CMI had been stripped from the training data or that its absence indicated an intent to hide any copyright infringement.

Claims of unlawful business practices, fraudulent conduct, negligence, and unjust enrichment were similarly dismissed.

The judge did allow a claim of unfair business practices to proceed.

“Assuming the truth of plaintiffs’ allegations – that defendants used plaintiffs’ copyrighted works to train their language models for commercial profit – the court concludes that defendants’ conduct may constitute an unfair practice,” Martínez-Olguín wrote.

Although this case against OpenAI has been narrowed, it clearly isn’t over yet. The plaintiffs have been given another opportunity to amend their initial arguments alleging violation of copyright by filing a fresh complaint before March 13.

The Register has asked OpenAI and a lawyer representing the plaintiffs for comment. We’ll let you know if they have anything worth saying. ®

Source: US judge dismisses authors’ copyright claim against OpenAI • The Register

See also: A Bunch Of Authors Sue OpenAI Claiming Copyright Infringement, Because They Don’t Understand Copyright

and: OpenAI disputes authors’ claims that every ChatGPT response is a derivative work, it’s transformative

France uncovers a vast Russian disinformation campaign in Europe

Russia has been at the forefront of internet disinformation techniques at least since 2014, when it pioneered the use of bot farms to spread fake news about its invasion of Crimea. According to French authorities, the Kremlin is at it again. On February 12th Viginum, the French foreign-disinformation watchdog, announced it had detected preparations for a large disinformation campaign in France, Germany, Poland and other European countries, tied in part to the second anniversary of Vladimir Putin’s invasion of Ukraine and the elections to the European Parliament in June.

Viginum said it had uncovered a Russian network of 193 websites which it codenames “Portal Kombat”. Most of these sites, such as topnews.uz.ua, were created years ago and many were left dormant. Over 50 of them, such as news-odessa.ru and pravda-en.com, have been created since 2022. Current traffic to these sites, which exist in various languages including French, German, Polish and English, is low. But French authorities think they are ready to be activated aggressively as part of what one official calls a “massive” wave of Russian disinformation.

Viginum says it watched the sites between September and December 2023. It concluded that they do not themselves generate news stories, but are designed to spread “deceptive or false” content about the war in Ukraine, both on websites and via social media. The underlying objective is to undermine support for Ukraine in Europe. According to the French authorities, the network is controlled by a single Russian organisation.

[…]

For France, the detection of this latest Russian destabilisation effort comes after a series of campaigns that it has attributed to Moscow. Last November the French foreign ministry denounced a “Russian digital interference operation” that spread photos of Stars of David stencilled on walls in a neighbourhood of Paris, in order to stir intercommunal tension in France shortly after the start of the Israel-Hamas conflict. Viginum then detected a network of 1,095 bots on X (formerly Twitter), which published 2,589 posts. It linked this to a Russian internet complex called Recent Reliable News, known for cloning the websites of Western media outlets in order to spread fake news; the EU has dubbed that complex “Doppelgänger”.

France held the same network responsible in June 2023 for the cloning of various French media websites, as well as that of the French foreign ministry. On the cloned ministry website, hackers posted a statement suggesting, falsely, that France was to introduce a 1.5% “security tax” to finance military aid to Ukraine.

[…]

The EU wants to criminalize AI-generated deepfakes and the non-consensual sending of intimate images

[…] the European Council and Parliament have agreed with the proposal to criminalize, among other things, different types of cyber-violence. The proposed rules will criminalize the non-consensual sharing of intimate images, including deepfakes made by AI tools, which could help deter revenge porn. Cyber-stalking, online harassment, misogynous hate speech and “cyber-flashing,” or the sending of unsolicited nudes, will also be recognized as criminal offenses.

The commission says that having a directive for the whole European Union that specifically addresses those particular acts will help victims in Member States that haven’t criminalized them yet. “This is an urgent issue to address, given the exponential spread and dramatic impact of violence online,” it wrote in its announcement.

[…]

In its reporting, Politico suggested that the recent spread of pornographic deepfake images using Taylor Swift’s face spurred EU officials to move forward with the proposal.

[…]

“The final law is also pending adoption in Council and European Parliament,” the EU Council said. According to Politico, if all goes well and the bill becomes a law soon, EU states will have until 2027 to enforce the new rules.

Source: The EU wants to criminalize AI-generated porn images and deepfakes

The original article has a seriously misleading title, I guess for clickbait.