The Linkielist

Linking ideas with the world

Danish Minister of Justice and chief architect of the current Chat Control proposal, Peter Hummelgaard:

“We must break with the totally erroneous perception that it is everyone’s civil liberty to communicate on encrypted messaging services.”

Share your thoughts via https://fightchatcontrol.eu/, or write to jm@jm.dk directly.

Source: https://www.ft.dk/samling/20231/almdel/REU/spm/1426/index.htm

In his answers he cites “but we must protect the children” – as soon as that argument is trotted out, take a good look at what they are taking away from you. After all, who can be against the safety of children? But blanket surveillance is bad for children and awful for society. If you know you are being watched, you can’t speak freely, you can’t voice your opinion, and democracy cannot function. THAT is bad for the children.

There is something rotten in the state of Denmark. Big Brother, 1984, they were warnings, not manuals.

Source: https://mastodon.social/@chatcontrol/115204439983078498

More discussion: https://www.reddit.com/r/europe/comments/1nhdtoz/danish_minister_of_justice_we_must_break_with_the/

PS I would not buy a used camel from this creep.

Swiss government may disable privacy tech, stoking fears of mass surveillance

The Swiss government could soon require service providers with more than 5,000 users to collect government-issued identification, retain subscriber data for six months and, in many cases, disable encryption.

The proposal, which is not subject to parliamentary approval, has alarmed privacy and digital-freedom advocates worldwide because it would destroy anonymity online, including for people located outside of Switzerland.

A large number of virtual private network (VPN) companies and other privacy-preserving firms are headquartered in the country because it has historically had liberal digital privacy laws alongside its famously discreet banking ecosystem.

Proton, which offers secure and end-to-end encrypted email along with an ultra-private VPN and cloud storage, announced on July 23 that it is moving most of its physical infrastructure out of Switzerland due to the proposed law.

The company is investing more than €100 million in the European Union, the announcement said, and plans to help develop a “sovereign EuroStack for the future of our home continent.” Switzerland is not a member of the EU.

Proton said the decision was prompted by the Swiss government’s attempt to “introduce mass surveillance.”

Proton founder and CEO Andy Yen told Radio Télévision Suisse (RTS) that the suggested regulation would be illegal in the EU and United States.

“The only country in Europe with a roughly equivalent law is Russia,” Yen said.

[…]

Internet users would no longer be able to register for a service with just an email address or anonymously and would instead have to provide their passport, driver’s license or another official ID to subscribe, said Chloé Berthélémy, senior policy adviser at European Digital Rights (EDRi), an association of civil and human rights organizations from across Europe.

The regulation also includes a mass data retention obligation requiring that service providers keep users’ email addresses, phone numbers and names along with IP addresses and device port numbers for six months, Berthélémy said. Port numbers are identifiers used to direct data to a specific application or service on a computer.

All authorities would need to do to obtain the data, Berthélémy said, is make a simple request that would circumvent existing legal control mechanisms such as court orders.

“The right to anonymity is supporting a very wide range of communities and individuals who are seeking safety online,” Berthélémy said.

“In a world where we have increasing attacks from governments on specific minority groups, on human rights defenders, journalists, any kind of watchdogs and anyone who holds those in power accountable, it’s very crucial that we … preserve our privacy online in order to do those very crucial missions.”

Source: Swiss government looks to undercut privacy tech, stoking fears of mass surveillance | The Record from Recorded Future News

Proton Mail Suspended Journalist Accounts at the Request of Some Cybersecurity Agency, Without Any Process

The company behind the Proton Mail email service, Proton, describes itself as a “neutral and safe haven for your personal data, committed to defending your freedom.”

But last month, Proton disabled email accounts belonging to journalists reporting on security breaches of various South Korean government computer systems following a complaint by an unspecified cybersecurity agency. After a public outcry, and multiple weeks, the journalists’ accounts were eventually reinstated — but the reporters and editors involved still want answers on how and why Proton decided to shut down the accounts in the first place.

Martin Shelton, deputy director of digital security at the Freedom of the Press Foundation, highlighted that numerous newsrooms use Proton’s services as alternatives to something like Gmail “specifically to avoid situations like this,” pointing out that “While it’s good to see that Proton is reconsidering account suspensions, journalists are among the users who need these and similar tools most.” Newsrooms like The Intercept, the Boston Globe, and the Tampa Bay Times all rely on Proton Mail for emailed tip submissions.

Shelton noted that perhaps Proton should “prioritize responding to journalists about account suspensions privately, rather than when they go viral.”

On Reddit, Proton’s official account stated that “Proton did not knowingly block journalists’ email accounts” and that the “situation has unfortunately been blown out of proportion.” Proton did not respond to The Intercept’s request for comment.

The two journalists whose accounts were disabled were working on an article published in the August issue of the long-running hacker zine Phrack. The story described how a sophisticated hacking operation — what’s known in cybersecurity parlance as an APT, or advanced persistent threat — had wormed its way into a number of South Korean computer networks, including those of the Ministry of Foreign Affairs and the military Defense Counterintelligence Command, or DCC.

The journalists, who published their story under the names Saber and cyb0rg, describe the hack as being consistent with the work of Kimsuky, a notorious North Korean state-backed APT sanctioned by the U.S. Treasury Department in 2023.

As they pieced the story together, emails viewed by The Intercept show that the authors followed cybersecurity best practices and conducted what’s known as responsible disclosure: notifying affected parties that a vulnerability has been discovered in their systems prior to publicizing the incident.

Saber and cyb0rg created a dedicated Proton Mail account to coordinate the responsible disclosures, then proceeded to notify the impacted parties, including the Ministry of Foreign Affairs and the DCC, and also notified South Korean cybersecurity organizations like the Korea Internet and Security Agency, and KrCERT/CC, the state-sponsored Computer Emergency Response Team. According to emails viewed by The Intercept, KrCERT wrote back to the authors, thanking them for their disclosure.

A note on cybersecurity jargon: CERTs are agencies consisting of cybersecurity experts specializing in dealing with and responding to security incidents. CERTs exist in over 70 countries — with some countries having multiple CERTs, each specializing in a particular field such as the financial sector — and may be government-sponsored or private organizations. They adhere to a set of formal technical standards, such as being expected to react to reported cybersecurity threats and security incidents. A high-profile example of a CERT agency in the U.S. is the Cybersecurity and Infrastructure Security Agency (CISA), which has recently been gutted by the Trump administration.

A week after the print issue of Phrack came out, and a few days before the digital version was released, Saber and cyb0rg found that the Proton account they had set up for the responsible disclosure notifications had been suspended. A day later, Saber discovered that his personal Proton Mail account had also been suspended. Phrack posted a timeline of the account suspensions at the top of the published article, and later highlighted the timeline in a viral social media post. Both accounts were suspended owing to an unspecified “potential policy violation,” according to screenshots of account login attempts reviewed by The Intercept.

The suspension notice instructed the authors to fill out Proton’s abuse appeals form if they believed the suspension was in error. Saber did so, and received a reply from a member of Proton Mail’s Abuse Team who went by the name Dante.

In an email viewed by The Intercept, Dante told Saber that their account “has been disabled as a result of a direct connection to an account that was taken down due to violations of our terms and conditions while being used in a malicious manner.” Dante also provided a link to Proton’s terms of service, going on to state, “We have clearly indicated that any account used for unauthorized activities, will be sanctioned accordingly.” The response concluded by stating, “We consider that allowing access to your account will cause further damage to our service, therefore we will keep the account suspended.”

On August 22, a Phrack editor reached out to Proton, writing that no hacked data had passed through the suspended email accounts, and asked if the account suspensions could be de-escalated. After receiving no response from Proton, the editor sent a follow-up email on September 6. Proton once again did not reply.

On September 9, the official Phrack X account made a post asking Proton’s official account why Proton was “cancelling journalists and ghosting us,” adding: “need help calibrating your moral compass?” The post quickly went viral, garnering over 150,000 views.

Proton’s official account replied the following day, stating that Proton had been “alerted by a CERT that certain accounts were being misused by hackers in violation of Proton’s Terms of Service. This led to a cluster of accounts being disabled. Our team is now reviewing these cases individually to determine if any can be restored.” Proton then stated that they “stand with journalists” but “cannot see the content of accounts and therefore cannot always know when anti-abuse measures may inadvertently affect legitimate activism.”

Proton did not publicly specify which CERT had alerted them, and didn’t answer The Intercept’s request for the name of the specific CERT which had sent the alert. KrCERT also did not reply to The Intercept’s question about whether they were the CERT that had sent the alert to Proton.

Later in the day, Proton’s founder and CEO Andy Yen posted on X that the two accounts had been reinstated. Neither Yen nor Proton explained why the accounts had been reinstated, whether they had been found not to violate the terms of service after all, why they had been suspended in the first place, or why a member of the Proton Abuse Team had reiterated during Saber’s appeal that the accounts violated the terms of service.

Phrack noted that the account suspensions created a “real impact to the author. The author was unable to answer media requests about the article.” The co-authors, Phrack pointed out, were also in the midst of the responsible disclosure process and working together with the various affected South Korean organizations to help fix their systems. “All this was denied and ruined by Proton,” Phrack stated.

Phrack editors said that the incident leaves them “concerned what this means to other whistleblowers or journalists. The community needs assurance that Proton does not disable accounts unless Proton has a court order or the crime (or ToS violation) is apparent.”

Source: Proton Mail Suspended Journalist Accounts at Request of Cybersecurity Agency

If Proton can’t view the content of accounts, how did Proton verify some random CERT’s claims before deciding to close the accounts? And how did Proton review the cases to see if they could be restored? Is it Proton policy to treat people as guilty until proven innocent? This attitude justifies people blowing up about the incident, because it shows how vulnerable users are to the random whims of Proton rather than to any kind of transparent, diligent process.

We beat Chat Control but the fight isn’t over – another surveillance law, one that mandates that companies save user data for Europol, is making its way through right now, and there are less than 24 hours left to give the EU feedback!

Please follow this link to the questionnaire and help save our future – otherwise total surveillance like never seen before will strip you of every bit of privacy and, later, of the fundamental rights you have as an EU citizen.

++++++++++++++++++++++++++++

Information

The previous data retention law was declared illegal in 2014 by the CJEU (the EU’s highest court) for being mass surveillance and violating human rights.

Since most EU states refused to follow the court order and the EU Commission refused to enforce it, the CJEU recently caved to political pressure and changed its stance on mass surveillance, making it legal.

And that instantly spawned this data retention law, which is more far-reaching than the original that was deemed illegal. Here you can read the entire plan the EU is following. Briefly:

  • they want to sanction unlicensed messaging apps, hosting services and websites that don’t spy on users (and impose criminal penalties)
  • mandatory data retention: all your online activity must be tied to your identity
  • the end of privacy-friendly VPNs and other services
  • cooperation with hardware manufacturers to ensure lawful access by design (backdoors for phones and computers)
  • prison for everybody who doesn’t comply

If you don’t know what the best options for some questions are, privacy-wise, check out this answering guide by EDRi (the European digital rights organisation).

Source: https://www.reddit.com/r/BuyFromEU/comments/1neecov/we_beat_chat_control_but_the_fight_isnt_over/

18 popular VPNs turn out to belong to just 3 owners – and contain security flaws as well

A new peer-reviewed study alleges that 18 of the 100 most-downloaded virtual private network (VPN) apps on the Google Play Store are secretly connected in three large families, despite claiming to be independent providers. The paper doesn’t indict any of our picks for the best VPN, but the services it investigates are popular, with 700 million collective downloads on Android alone.

The study, published in the journal of the Privacy Enhancing Technologies Symposium (PETS), doesn’t just find that the VPNs in question failed to disclose behind-the-scenes relationships, but also that their shared infrastructures contain serious security flaws. Well-known services like Turbo VPN, VPN Proxy Master and X-VPN were found to be vulnerable to attacks capable of exposing a user’s browsing activity and injecting corrupted data.

Titled “Hidden Links: Analyzing Secret Families of VPN apps,” the paper was inspired by an investigation by VPN Pro, which found that several VPN companies each were selling multiple apps without identifying the connections between them. This spurred the “Hidden Links” researchers to ask whether the relationships between secretly co-owned VPNs could be documented systematically.

[…]

Family A consists of Turbo VPN, Turbo VPN Lite, VPN Monster, VPN Proxy Master, VPN Proxy Master Lite, Snap VPN, Robot VPN and SuperNet VPN. These were found to be shared between three providers — Innovative Connecting, Lemon Clove and Autumn Breeze. All three have been linked to Qihoo 360, a firm based in mainland China and identified as a “Chinese military company” by the US Department of Defense.

Family B consists of Global VPN, XY VPN, Super Z VPN, Touch VPN, VPN ProMaster, 3X VPN, VPN Inf and Melon VPN. These eight services, which are shared between five providers, all use the same IP addresses from the same hosting company.

Family C consists of X-VPN and Fast Potato VPN. Although these two apps each come from a different provider, the researchers found that both used very similar code and included the same custom VPN protocol.

If you’re a VPN user, this study should concern you for two reasons. The first problem is that companies entrusted with your private activities and personal data are not being honest about where they’re based, who owns them or who they might be sharing your sensitive information with. Even if their apps were all perfect, this would be a severe breach of trust.

But their apps are far from perfect, which is the second problem. All 18 VPNs across all three families use the Shadowsocks protocol with a hard-coded password, which makes them susceptible to takeover from both the server side (which can be used for malware attacks) and the client side (which can be used to eavesdrop on web activity).
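
For readers wondering why a hard-coded password matters so much: classic Shadowsocks derives its master encryption key directly from the password, so a secret that ships identically inside every copy of an app is effectively public. Below is a rough Python sketch of that derivation (the OpenSSL-style EVP_BytesToKey routine used by legacy Shadowsocks); the password shown is a made-up placeholder, not one of the values the researchers found.

import hashlib

def evp_bytes_to_key(password: bytes, key_len: int) -> bytes:
    # OpenSSL-style EVP_BytesToKey (MD5, no salt), as used by legacy Shadowsocks
    # to turn a text password into the symmetric master key.
    derived, prev = b"", b""
    while len(derived) < key_len:
        prev = hashlib.md5(prev + password).digest()
        derived += prev
    return derived[:key_len]

# Hypothetical password baked into a VPN app's binary. Every user of the app
# shares it, and anyone who unpacks the APK can read it and derive the same key.
HARDCODED_PASSWORD = b"example-shared-secret"
master_key = evp_bytes_to_key(HARDCODED_PASSWORD, key_len=32)
print(master_key.hex())

With the master key derivable by anyone, a party positioned to capture traffic can read it and an attacker can impersonate the server, which is broadly the eavesdropping and takeover risk the researchers describe.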

[…]

 

Source: Researchers find alarming overlaps among 18 popular VPNs

So Spotify Public Links Now Show Your Personal Information. You Need to Disable Spotify DMs To Get Rid Of It.

Spotify wants to be yet another messaging platform, but its new DM system has a quirk that makes me hesitant to recommend it. Spotify used to be a non-identity based platform, but things changed once it added messaging. Now, the Spotify DM system is attaching account information to song links and putting it in front of users’ eyes. That means it can accidentally leak the name and profile picture of whoever shared a link, even if they didn’t intend to give out their account information, too. Thankfully there’s a way to make links more private, and to disable Spotify DMs altogether.

How Spotify is accidentally leaking users’ information

It all starts with tracking URLs. Many major companies on the web use these. They embed information at the end of a URL to track where clicks on it came from. Which website, which page, or in Spotify’s case, which user. If you’ve generated a Share link for a song or playlist in the past, it contained your user identity string at the end. And when someone accessed and acted on that link, by adding the song or playing it, your account information was saved in their account’s identity as a connection of sorts. Maybe a little invasive, but because users couldn’t do much with that information, it was mostly just a way for Spotify to track how often people were sharing music between each other.

Before, this happened in the background and no one really cared. But with the new Spotify DM feature, connections made via tracking links are suddenly being put front and center right before users’ eyes. As spotted by Reddit user u/sporoni122, these connections are now showing up in a “Suggested” section when using Spotify DMs, even if you just happened to click on a public link once and never heard of the person who shared it. Alternatively, you might have shared a link in the past, and could be shown account information for people who clicked on it.

Even if an account is public, I could see how this would be annoying. Imagine you share a song in a Discord server where you go by an anonymous name, but someone clicks on it and finds your Spotify account, where you might go by your real name. Bam, they suddenly know who you are.

Reddit user u/Reeceeboii added that Spotify is using this URL tracking behavior to populate a list of songs and playlists shared between two users even if they happened via third-party messaging services like WhatsApp.

So, if you don’t want others to find your Spotify account through your shared songs, what do you do? Well, before posting in anonymous communities like Discord or X, try cleaning up your links first.

My colleagues and I have previously written about how you can remove tracking information from a URL automatically on iPhone, how you can use a Mac app to clean links without any effort, or how you can use an all-in one extension to get the job done regardless of platform. You can also use a website like Link Cleaner to clean up your links.

Or you can take the manual approach. In your Spotify link, remove everything at the end starting with the question mark.

So this tracked link:

https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD?si=28575ba800324

Becomes this clean link:

https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD

Here, the part with “si=” is your identifier. Of course, if it’s a playlist you’re sharing, it will still show your name and your profile picture—that’s how the platform has always worked. So if you want to stay truly anonymous, you’ll want to keep your playlists private.
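
If you share links often, that manual trim is easy to automate. Here is a minimal Python sketch that drops the query string (everything from the question mark onward, where the si= identifier lives) from a link:

from urllib.parse import urlsplit, urlunsplit

def clean_spotify_link(url: str) -> str:
    # Keep scheme, host and path; drop the query string (e.g. the si= share
    # identifier) and any fragment.
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

print(clean_spotify_link(
    "https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD?si=28575ba800324"
))
# prints: https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD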

How to disable Spotify DMs

If you don’t see yourself using Spotify DMs, it might also be a good idea to just get rid of them entirely. You’ll probably still want to remove tracking information from your URLs before sharing, just for due diligence. But if you don’t want to worry about getting DMs on Spotify or having your account show up as a Suggested contact to strangers, you should also go to Settings > Privacy and social > Social features and disable Messages. That’ll opt you out of the DM feature altogether.

Source: If You’ve Ever Shared a Spotify Link Publicly, You Need to Disable Spotify DMs

Age verification legislation is tanking traffic to sites that comply, and rewarding those that don’t

A new report suggests that the UK’s age verification measures may be having unforeseen knock-on effects on web traffic, with the real winners being sites that flout the law entirely.

[…]

Sure, there are ways around this if you’d rather not feed your personal data to a platform’s third-party age verification vendor. However, sites are seeing more significant consequences beyond just locking you out of your DMs. For a start, The Washington Post reports web traffic to pornography sites implementing age verification has taken a totally predictable hit—but those flouting the new age check requirements have seen traffic as much as triple compared to the same time last year.

The Washington Post looked at the 90 most visited porn sites based on UK visitor data from Similarweb. Of the 90 total sites, 14 hadn’t yet deployed ‘scan your face’ age checks. The publication found that while traffic from British IP addresses to sites requiring age verification had cratered, the 14 sites without age checks “have been rewarded with a flood of traffic” from UK-based users.

It’s worth noting that VPN usage might distort the location data of users. Still, such a surge of traffic likely brings with it a surge in income in the form of ad revenue. Ofcom, the UK’s communications regulator overseeing everything from TV to the internet, may have something to say about that, though. Meanwhile, sites that comply with the rules are not only losing out on ad revenue, but are also expected to pay for the legally required age verification services on top.

[…]

Alright, stop snickering about the mental image of someone perusing porn sites professionally, and let me tell you why this is important. You may have already read that while a lot of Brits support the age verification measures broadly speaking, a sizable portion feels they’ve been implemented poorly. Indeed, a lot of the aforementioned sites that complied with the law also criticised it by linking to a petition seeking its repeal. The UK government has responded to this petition by saying it has “no plans to repeal the Online Safety Act” despite, at time of writing, over 500,000 signatures urging it to do just that.

[…]

Source: Age verification legislation is tanking traffic to sites that comply, and rewarding those that don’t | PC Gamer

Of course age verification isn’t just hitting porn sites. It is also hitting LGBTQ+ sites, public health forums, conflict reporting and global journalism and more.

And there is no way to do Age Verification privately.

Europol wants to keep all data forever for law enforcement, says unnamed(!) official. E.U. Court of Human Rights backed encryption as basic to privacy rights in 2024 and now Big Brother Chat Control is on the agenda again (EU consultation feedback link at end)

While some American officials continue to attack strong encryption as an enabler of child abuse and other crimes, a key European court has upheld it as fundamental to the basic right to privacy.

[…]

In the Russian case, the users relied on Telegram’s optional “secret chat” functions, which are also end-to-end encrypted. Telegram had refused to break into chats of a handful of users, telling a Moscow court that it would have to install a back door that would work against everyone. It lost in Russian courts but did not comply, leaving it subject to a ban that has yet to be enforced.

The European court backed the Russian users, finding that law enforcement having such blanket access “impairs the very essence of the right to respect for private life” and therefore would violate Article 8 of the European Convention, which enshrines the right to privacy except when it conflicts with laws established “in the interests of national security, public safety or the economic well-being of the country.”

The court praised end-to-end encryption generally, noting that it “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.”

In addition to prior cases, the judges cited work by the U.N. human rights commissioner, who came out strongly against encryption bans in 2022, saying that “the impact of most encryption restrictions on the right to privacy and associated rights are disproportionate, often affecting not only the targeted individuals but the general population.”

High Commissioner Volker Türk said he welcomed the ruling, which he promoted during a recent visit to tech companies in Silicon Valley. Türk told The Washington Post that “encryption is a key enabler of privacy and security online and is essential for safeguarding rights, including the rights to freedom of opinion and expression, freedom of association and peaceful assembly, security, health and nondiscrimination.”

[…]

Even as the fight over encryption continues in Europe, police officials there have talked about overriding end-to-end encryption to collect evidence of crimes other than child sexual abuse — or any crime at all, according to an investigative report by the Balkan Investigative Reporting Network, a consortium of journalists in Southern and Eastern Europe.

“All data is useful and should be passed on to law enforcement, there should be no filtering … because even an innocent image might contain information that could at some point be useful to law enforcement,” an unnamed Europol police official said in 2022 meeting minutes released under a freedom of information request by the consortium.

Source: E.U. Court of Human Rights backs encryption as basic to privacy rights – The Washington Post

An ‘unnamed’ Europol police official is peak irony in this context.

Remember to leave your feedback where you can, in this case: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14680-Impact-assessment-on-retention-of-data-by-service-providers-for-criminal-proceedings-/public-consultation_en

The EU wants to know what you think about it keeping all your data for *cough* crime stuff.

The EU wants to save all your data, or as much as possible for as long as possible. Adding insult to injury for the victims of crime, they say that they want to do this to fight crime. How do you feel about the EU being turned into a surveillance society? Leave your voice via the link below.

Source: Data retention by service providers for criminal proceedings – impact assessment

Croatians suddenly realise that EU CSAM rules include hidden pervasive chat control surveillance, turning the EU into Big Brother – and disapprove massively.

“The Prime Minister of the Republic of Croatia, Andrej Plenković, at yesterday’s press conference accused the opposition of having supported the proposal for a regulation of the European Parliament and of the Council establishing rules to prevent and combat the sexual abuse of children, COM(2022) 209, unpopularly referred to as ‘chat control’ because, if the proposal were adopted in its integral form, it would allow criminal prosecution bodies to monitor the private communications of all citizens.

[…]

On June 17, MEP Bosanac, as well as colleagues from the SDP, HDZ and the vast majority of other European MPs, supported the Proposal for Amendments to the 2011 Directive on combating the sexual abuse and sexual exploitation of children and child pornography. Although both legislative documents were adopted within the same package of EU strategies for a more effective fight against child abuse and have similar names, the two documents are intrinsically different – one is a regulation, the other a directive, they have different rapporteurs, and they entered the procedure a full two years apart.”

‘We’ve already spoken about it’

“The basic difference, however, is that the proposal to amend the Directive does not contain any mention of ‘chat control’, i.e. the mass surveillance of citizens. MEP Bosanac, as well as his colleagues from the We Can! party, strongly oppose the proposal for a regulation that envisages monitoring the content of the private conversations of all citizens and which has yet to be voted on in the European Parliament. Such a proposal directly violates Article 7 of the Charter of Fundamental Rights of the European Union, as confirmed by the Court of Justice of the European Union in the “Schrems I” ruling (paragraph 94), a position also confirmed by the Legal Service of the Council of the EU.

In the previous European Parliament, the Greens resisted mass surveillance, focusing instead on monitoring suspicious users – the security services must first identify suspicious users and then monitor them, not the other way around. People who abuse the internet to commit criminal acts must be recognized and isolated by the numerous services whose job that is, but through focused surveillance of individuals, not mass surveillance.

We all have the right to privacy, because privacy must remain a secure space for our human identity. Finally, MEP Bosanac calls on Prime Minister Plenković to oppose this harmful proposal at the European Council and to protect the right to privacy of Croatian citizens,” Gordan Bosanac’s office said in a statement.

Source: Bosanac accuses Plenković of lying: ‘I urge him to counter that proposal’

Parliamentary questions are being asked as well

A review conducted under the Danish Presidency examining the proposal for a regulation on combatting online child sexual abuse material – dubbed the ‘Chat Control’ or CSAM regulation – has raised new, grave concerns about the respect of fundamental rights in the EU.

As it stands, the proposal envisages mass scanning of private communications, including encrypted conversations, raising serious issues of compliance with Article 7 of the Charter of Fundamental Rights by threatening to undermine the data security of citizens, businesses and institutions. A mandatory weakening of end-to-end encryption would create security gaps open to exploitation by cybercriminals, rival states and terrorist organisations, and would also harm the competitiveness of our digital economy.

At the same time, the proposed technical approach is based on automated content analysis tools which produce high rates of false positives, creating the risk that innocent users could be wrongly incriminated, while the effectiveness of this approach in protecting children has not been proven. Parliament and the Council have repeatedly rejected mass surveillance.

  • 1. Considering the mandatory scanning of all private communications, is the proposed regulation compatible with Article 7 of the Charter of Fundamental Rights?

  • 2. How will it ensure that child protection is achieved through targeted measures that are proven to be effective, without violating the fundamental rights of all citizens?

  • 3. How does it intend to prevent the negative impact on cybersecurity and economic competitiveness caused by weakening encryption?

Source: Proposed Chat Control law presents new blow for privacy

Google wants to verify all developers’ identities, including those not on the Play Store, in massive data grab

  • Google will soon verify the identities of developers who distribute Android apps outside the Play Store.
  • Developers must submit their information to a new Android Developer Console, increasing their accountability for their apps.
  • Rolling out in phases from September 2026, these new verification requirements are aimed at protecting users from malware by making it harder for malicious developers to remain anonymous.

 

Most Android users acquire apps from the Google Play Store, but a small number of users download apps from outside of it, a process known as sideloading. There are some nifty tools that aren’t available on the Play Store because their developers don’t want to deal with Google’s approval or verification requirements. This is understandable for hobbyist developers who simply want to share something cool or useful without the burden of shedding their anonymity or committing to user support.

[…]

Today, Google announced it is introducing a new “developer verification requirement” for all apps installed on Android devices, regardless of source. The company wants to verify the identity of all developers who distribute apps on Android, even if those apps aren’t on the Play Store. According to Google, this adds a “crucial layer of accountability to the ecosystem” and is designed to “protect users from malware and financial fraud.” Only “certified” Android devices — meaning those that ship with the Play Store, Play Services, and other Google Mobile Services (GMS) apps — will block the installation of apps from unverified developers.

Google says it will only verify the identity of developers, not check the contents of their apps or their origin. However, it’s worth noting that Google Play Protect, the malware scanning service integrated into the Play Store, already scans all installed apps regardless of where they came from. Thus, the new requirement doesn’t prevent malicious apps from reaching users, but it does make it harder for their developers to remain anonymous. Google likens this new requirement to ID checks at the airport, which verify the identity of travelers but not whether they’re carrying anything dangerous.

[…]

Source: Google wants to make sideloading Android apps safer by verifying developers’ identities – Android Authority

So the new requirement doesn’t make things any safer, but gives Google a whole load of new personal data for no good reason other than that they want it. I guess it’s becoming more and more time to de-Google.

Uni of Melbourne used Wi-Fi location data to ID protestors

Australia’s University of Melbourne last year used Wi-Fi location data to identify student protestors.

The University used Wi-Fi to identify students who participated in a July 2024 sit-in protest. As described in a report [PDF] into the matter by the state of Victoria’s Office of the Information Commissioner, the University directed protestors to leave the building they occupied and warned that those who remained could be suspended, disciplined, or reported to police.

The report says 22 chose to remain, and that the University used CCTV and Wi-Fi location data to identify them.

The Information Commissioner found that use of CCTV to identify protestors did not breach privacy, but felt using Wi-Fi location data did because the University’s policies lacked detail.

“Given that individuals would not have been aware of why their Wi-Fi location data was collected and how it may be used, they could not exercise an informed choice as to whether to use the Wi-Fi network during the sit-in, and be aware of the possible consequences for doing so,” the report found.

As the investigation into use of location data unfolded, the University changed its policies regarding use of location data. The Office of the Information Commissioner therefore decided not to issue a formal compliance notice, and will monitor the University to ensure it complies with its undertakings.

Source: Australian uni used Wi-Fi location data to ID protestors • The Register

Privacy‑Preserving Age Verification Falls Apart On Contact With Reality

[…] Identity‑proofing creates a privacy bottleneck. Somewhere, an identity provider must verify you. Even if it later mints an unlinkable token, that provider is the weak link—and in regulated systems it will not be allowed to “just delete” your information. As Bellovin puts it:

Regulation implies the ability for governments to audit the regulated entities’ behavior. That in turn implies that logs must be kept. It is likely that such logs would include user names, addresses, ages, and forms of credentials presented.

Then there’s the issue of fraud and duplication of credentials. Accepting multiple credential types increases coverage but also increases abuse; people can and do hold multiple valid IDs:

The fact that multiple forms of ID are acceptable… exacerbates the fraud issue…This makes it impossible to prevent a single person from obtaining multiple primary credentials, including ones for use by underage individuals.

Cost and access will absolutely chill speech. Identity providers are expensive. If users pay, you’ve built a wealth test for lawful speech. If sites pay, the costs roll downhill (fees, ads, data‑for‑access) and coverage narrows to the cheapest providers who may also be more susceptible to breaches:

Operating an IDP is likely to be expensive… If web sites shoulder the cost, they will have to recover it from their users. That would imply higher access charges, more ads (with their own privacy challenges), or both.

Sharing credentials drives mission creep, which creates dangers of its own. If a token proves only “over 18,” people will share it (parents to kids, friends to friends). To deter that, providers tie tokens to identities/devices or bundle more attributes—making them more linkable and more revocable:

If the only use of the primary credential is obtaining age-verifying subcredentials, this isn’t much of a deterrent—many people simply won’t care… That, however, creates pressure for mission creep…, including opening bank accounts, employment verification, and vaccination certificates; however, this is also a major point of social control, since it is possible to revoke a primary credential and with it all derived subcredentials.

The end result, then, is that you’re not just attacking privacy again, but creating a tool for authoritarian pressure:

Those who are disfavored by authoritarian governments may lose access not just to pornography, but to social media and all of these other services.

He also grounds it in lived reality, with a case study that shows who gets locked out first:

Consider a hypothetical person “Chris”, a non-driving senior citizen living with an adult child in a rural area of the U.S… Apart from the expense— quite possibly non-trivial for a poor family—Chris must persuade their child to then drive them 80 kilometers or more to a motor vehicles office…

There is also the social aspect. Imagine the embarrassment to all of an older parent having to explain to their child that they wish to view pornography.

None of this is an attack on the math. It’s a reminder that deployment reality ruins the cryptographic ideal. There’s more in the paper, but you get the idea.

[…]

Source: Privacy‑Preserving Age Verification Falls Apart On Contact With Reality | Techdirt

Proton releases Lumo GPT 1.1: faster, more advanced, European and actually private

Today we’re releasing a powerful update to Lumo that gives you a more capable privacy-first AI assistant offering faster, more thorough answers with improved awareness of recent events.

Guided by feedback from our community, we’ve been busy upgrading our models and adding GPUs, which we’ll continue to do thanks to the support of our Lumo Plus subscribers. Lumo 1.1 performs significantly better across the board than the first version of Lumo, so you can now use it more effectively for a variety of use cases:

  • Get help planning projects that require multiple steps — it will break down larger goals into smaller tasks
  • Ask complex questions and get more nuanced answers
  • Generate better code — Lumo is better at understanding your requests
  • Research current events or niche topics with better accuracy and fewer hallucinations thanks to improved web search

New cat, new tricks, same privacy

The latest upgrade brings more accurate responses with significantly less need for corrections or follow-up questions. Lumo now handles complex requests much more reliably and delivers the precise results you’re looking for.

In testing, Lumo’s performance has increased across several metrics:

  • Context: 170% improvement in context understanding so it can accurately answer questions based on your documents and data
  • Coding: 40% better ability to understand requests and generate correct code
  • Reasoning: Over 200% improvement in planning tasks, choosing the right tools such as web search, and working through complex multi-step problems

Most importantly, Lumo does all of this while respecting the confidentiality of your chats. Unlike every major AI platform, Lumo is open source and built to be private by design. It doesn’t keep any record of your chats, your conversation history is secured with zero-access encryption so nobody else can see it, and your data is never used to train the models. Lumo is the only AI where your conversations are actually private.

Learn about Lumo privacy

Lumo mobile apps are now open source

Unlike Big Tech AIs that spy on you, Lumo is an open source application that exclusively runs open source models. Open source is especially important in AI because it confirms that the applications and models are not being used nefariously to manipulate responses to fit a political narrative or secretly leak data. While the Lumo web client is already open source, today we are also releasing the code for the mobile apps. In line with Lumo being the most transparent and private AI, we have also published the Lumo security model so you can see how Lumo’s zero-access encryption works and why nobody, not even Proton, can access your conversation history.

Source: Introducing Lumo 1.1 for faster, advanced reasoning | Proton

The EU could be scanning your chats by October 2025 with Chat Control

Denmark kicked off its EU Presidency on July 1, 2025, and, among its first actions, lawmakers swiftly reintroduced the controversial child sexual abuse (CSAM) scanning bill to the top of the agenda.

Dubbed Chat Control by critics, the bill aims to introduce new obligations for all messaging services operating in Europe to scan users’ chats, even if they’re encrypted.

The proposal, however, has been failing to attract the needed majority since May 2022, with Poland’s Presidency being the last to give up on such a plan.

Denmark is a strong supporter of Chat Control. Now, the new rules could be adopted as early as October 14, 2025, if the Danish Presidency manages to find a middle ground among member states.

Crucially, according to the latest data leaked by the former MEP for the German Pirate Party, Patrick Breyer, many countries that said no to Chat Control in 2024 are now undecided, “even though the 2025 plan is even more extreme,” he added.

[…]

As per its first version, all messaging software providers would be required to perform indiscriminate scanning of private messages to look for CSAM – so-called ‘client-side scanning‘. The proposal was met with a strong backlash, and the European Court of Human Rights ended up banning all legal efforts to weaken encryption of secure communications in Europe.

In June 2024, Belgium then proposed a new text targeting only shared photos, videos, and URLs, subject to users’ permission. This version didn’t satisfy either the industry or voting EU members due to its coercive nature: under the Belgian text, users would have to consent to shared material being scanned before it is encrypted in order to keep using the functionality.

Source: The EU could be scanning your chats by October 2025 – here’s everything we know | TechRadar

Pluralistic: “Privacy preserving age verification” is bullshit

[…]

when politicians are demanding that technologists NERD HARDER! to realize their cherished impossibilities.

That’s just happened, and in relation to one of the scariest, most destructive NERD HARDER! tech policies ever to be assayed (a stiff competition). I’m talking about the UK Online Safety Act, which imposes a duty on websites to verify the age of people they communicate with before serving them anything that could be construed as child-inappropriate (a category that includes, e.g., much of Wikipedia):

https://wikimediafoundation.org/news/2025/08/11/wikimedia-foundation-challenges-uk-online-safety-act-regulations/

The Starmer government has, incredibly, developed a passion for internet regulations that are even stupider than Tony Blair’s and David Cameron’s. Requiring people to identify themselves (generally, via their credit cards) in order to look at porn will create a giant database of every kink and fetish of every person in the UK, which will inevitably leak and provide criminals and foreign spies with a kompromat system they can sort by net worth of the people contained within.

This hasn’t deterred Starmer, who insists that if we just NERD HARDER!, we can use things like “zero-knowledge proofs” to create a “privacy-preserving” age verification system, whereby a service can assure itself that it is communicating with an adult without ever being able to determine who it is communicating with.

In support of this idea, Starmer and co like to cite some genuinely exciting and cool cryptographic work on privacy-preserving credential schemes. Now, one of the principal authors of the key papers on these credential schemes, Steve Bellovin, has published a paper that is pithily summed up via its title, “Privacy-Preserving Age Verification—and Its Limitations”:

https://www.cs.columbia.edu/~smb/papers/age-verify.pdf

The tldr of this paper is that Starmer’s idea will not work and cannot work. The research he relies on to defend the technological feasibility of his cherished plan does not support his conclusion.

Bellovin starts off by looking at the different approaches various players have mooted for verifying their users’ age. For example, Google says it can deploy a “behavioral” system that relies on Google surveillance dossiers to make guesses about your age. Google refuses to explain how this would work, but Bellovin sums up several of the well-understood behavioral age estimation techniques and explains why they won’t work. It’s one thing to screw up age estimation when deciding which ad to show you; it’s another thing altogether to do this when deciding whether you can access the internet.

Others say they can estimate your age by using AI to analyze a picture of your face. This is a stupid idea for many reasons, not least of which is that biometric age estimation is notoriously unreliable when it comes to distinguishing, say, 16 or 17 year olds from 18 year olds. Nevertheless, there are sitting US Congressmen who not only think this would work – they labor under the misapprehension that this is already going on:

https://pluralistic.net/2023/04/09/how-to-make-a-child-safe-tiktok/

So that just leaves the privacy-preserving credential schemes, especially the Camenisch-Lysyanskaya protocol. This involves an Identity Provider (IDP) that establishes a user’s identity and characteristics using careful document checks and other procedures. The IDP then hands the user a “primary credential” that can attest to everything the IDP knows about the user, and any number of “subcredentials” that only attest to specific facts about that user (such as their age).

These are used in zero-knowledge proofs (ZKP) – a way for two parties to validate that one of them asserts a fact without learning what that fact is in the process (this is super cool stuff). Users can send their subcredentials to a third party, who can use a ZKP to validate them without learning anything else about the user – so you could prove your age (or even just prove that you are over 18 without disclosing your age at all) without disclosing your identity.
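
To make the roles concrete, here is a deliberately simplified Python sketch of the data flow only: who issues what, and who learns what. The real Camenisch-Lysyanskaya scheme uses zero-knowledge proofs rather than the toy HMAC tag below, and the names (IdentityProvider, AgeToken) are illustrative, not taken from Bellovin’s paper.

import hmac, hashlib, secrets
from dataclasses import dataclass

@dataclass
class AgeToken:
    claim: str   # the only fact attested, e.g. "over_18" -- no name, no birth date
    nonce: str   # randomness so tokens are not trivially identical
    tag: str     # issuer's authentication tag over (claim, nonce)

class IdentityProvider:
    """Checks documents once, then issues age-only subcredentials."""
    def __init__(self):
        self._key = secrets.token_bytes(32)  # issuer secret

    def issue_age_token(self, verified_birth_year, current_year):
        if current_year - verified_birth_year < 18:
            return None
        nonce = secrets.token_hex(16)
        msg = f"over_18|{nonce}".encode()
        return AgeToken("over_18", nonce,
                        hmac.new(self._key, msg, hashlib.sha256).hexdigest())

    def check(self, token):
        msg = f"{token.claim}|{token.nonce}".encode()
        expected = hmac.new(self._key, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, token.tag)

# The IDP sees the user's documents; the website only ever sees the token.
idp = IdentityProvider()
token = idp.issue_age_token(verified_birth_year=1980, current_year=2025)
print(idp.check(token))  # True: "over 18" is attested without disclosing who the user is

Note that in this toy version the website has to send the token back to the issuer for checking, which is exactly the kind of linkage the real zero-knowledge construction avoids — and it illustrates Bellovin’s larger point that the identity provider remains the trust and privacy bottleneck regardless of how elegant the math downstream is.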

There’s some good news for implementing CL on the web: rather than developing a transcendentally expensive and complex new system for these credential exchanges and checks, CL can piggyback on the existing Public Key Infrastructure (PKI) that powers your browser’s ability to have secure sessions when you visit a website with https:// in front of the address (instead of just http://).

However, doing so poses several difficulties, which Bellovin enumerates under a usefully frank section header: “INSURMOUNTABLE OBSTACLES.”

The most insurmountable of these obstacles is getting set up with an IDP in the first place – that is, proving who you are to some agency, but only one such agency (so you can’t create two primary credentials and share one of them with someone underage). Bellovin cites Supreme Court cases about voter ID laws and the burdens they impose on people who are poor, old, young, disabled, rural, etc.

Fundamentally, it can be insurmountably hard for a lot of people to get, say, a driver’s license, or any other singular piece of ID that they can provide to an IDP in order to get set up on the system.

The usual answer for this is for IDPs to allow multiple kinds of ID. This does ease the burden on users, but at the expense of creating fatal weaknesses in the system: if you can set up an identity with multiple kinds of ID, you can visit different IDPs and set up an ID with each (just as many Americans today have driver’s licenses from more than one state).

The next obstacle is “user challenges,” like the problem of households with shared computers, or computers in libraries, hotels, community centers and other public places. The only effective way to handle this is to create (expensive) online credential stores, which are likely to be out of reach of the poor and disadvantaged people who disproportionately rely on public or shared computers.

Next are the “economic issues”: this stuff is expensive to set up and maintain, and someone’s gotta pay for it. We could ask websites that offer kid-inappropriate content to pay for it, but that sets up an irreconcilable conflict of interest. These websites are going to want to minimize their costs, and everything they can do to reduce costs will make the system unacceptably worse. For example, they could choose only to set up accounts with IDPs that are local to the company that operates the server, meaning that anyone who lives somewhere else and wants to access that website is going to have to somehow get certified copies of e.g. their birth certificate and driver’s license to IDPs on the other side of the planet. The alternative to having websites foot the bill for this is asking users to pay for it – meaning that, once again, we exclude poor people from the internet.

Finally, there’s “governance”: who runs this thing? In practice, the security and privacy guarantees of the CL protocol require two different kinds of wholly independent institutions: identity providers (who verify your documents), and certificate authorities (who issue cryptographic certificates based on those documents). If these two functions take place under one roof, the privacy guarantees of the system immediately evaporate.

An IDP’s most important role is verifying documents and associating them with a specific person. But not all IDPs will be created equal, and people who wish to cheat the system will gravitate to the worst IDPs. However, lots of people who have no nefarious intent will also use these IDPs, merely because they are close by, or popular, or were selected at random. A decision to strike off an IDP and rescind its verifications will force lots of people – potentially millions of people – to start over with the whole business of identifying themselves, during which time they will be unable to access much of the web. There’s no practical way for the average person to judge whether an IDP they choose is likely to be found wanting in the future.

So we can regulate IDPs, but who will do the regulation? Age verification laws affect people outside of a government’s national territory – anyone seeking to access content on a webserver falls under age verification’s remit. Remember, IDPs handle all kinds of sensitive data: do you want Russia, say, to have a say in deciding who can be an IDP and what disclosure rules you will have to follow?

To regulate IDPs (and certificate authorities), these entities will have to keep logs, which further compromises the privacy guarantees of the CL protocol.

Looming over all of this is a problem with the CL protocol being built on regulated entities: CL is envisioned as a way to do all kinds of business, from opening a bank account to proving your vaccination status or your right to work or receive welfare. Authoritarian governments who order primary credential revocations of their political opponents could thoroughly and terrifyingly “unperson” them at the stroke of a pen.

The paper’s conclusions provide a highly readable summary of these issues, which constitute a stinging rebuke to anyone contemplating age-verification schemes. These go well beyond the UK, and are in the works in Canada, Australia, the EU, Texas and Louisiana.

Age verification is an impossibility, and an impossibly terrible idea with impossibly vast consequences for privacy and the open web, as my EFF colleague Jason Kelley explained on the Malwarebytes podcast:

https://www.malwarebytes.com/blog/podcast/2025/08/the-worst-thing-for-online-rights-an-age-restricted-grey-web-lock-and-code-s06e16

Politicians – even nontechnical ones – can make good tech policy, provided they take expert feedback seriously (and distinguish it from self-interested industry lobbying).

When it comes to tech policy, wanting it badly is not enough. The fact that it would be really cool if we could get technology to do something has no bearing on whether we can actually get technology to do that thing. NERD HARDER! isn’t a policy, it’s a wish.

Wish in one hand and shit in the other and see which one will be full first:

https://www.reddit.com/r/etymology/comments/oqiic7/studying_the_origins_of_the_phrase_wish_in_one/

Source: Pluralistic: “Privacy preserving age verification” is bullshit (14 Aug 2025) – Pluralistic: Daily links from Cory Doctorow

UK passport database images used in facial recognition scans

Privacy groups report a surge in UK police facial recognition scans of databases secretly stocked with passport photos, without parliamentary oversight.

Big Brother Watch says the UK government has allowed images from the country’s passport and immigration databases to be made available to facial recognition systems, without informing the public or parliament.

The group claims the passport database contains around 58 million headshots of Brits, plus a further 92 million made available from sources such as the immigration database, visa applications, and more.

By way of comparison, the Police National Database contains circa 20 million photos of those who have been arrested by, or are at least of interest to, the police.

In a joint statement, Big Brother Watch, its director Silkie Carlo, Privacy International, and its senior technologist Nuno Guerreiro de Sousa, described the databases and lack of transparency as “Orwellian.” They have also written to both the Home Office and the Metropolitan Police, calling for a ban on the practice.

The comments come after Big Brother Watch submitted Freedom of Information requests, which revealed a significant uptick in police scanning the databases in question as part of the force’s increasing facial recognition use.

The number of searches by 31 police forces against the passport databases rose from two in 2020 to 417 by 2023, and scans using the immigration database photos rose from 16 in 2023 to 102 the following year.

Carlo said: “This astonishing revelation shows both our privacy and democracy are at risk from secretive AI policing, and that members of the public are now subject to the inevitable risk of misidentifications and injustice. Police officers can secretly take photos from protests, social media, or indeed anywhere and seek to identify members of the public without suspecting us of having committed any crime.

“This is a historic breach of the right to privacy in Britain that must end. We’ve taken this legal action to defend the rights of tens of millions of innocent people in Britain.”

[…]

Recent data from the Met attempted to imbue a sense of confidence in facial recognition, as the number of arrests the technology facilitated passed the 1,000 mark, the force said in July.

However, privacy campaigners were quick to point out that this accounted for just 0.15 percent of the total arrests in London since 2020. They suggested that despite the shiny 1,000 number, this did not represent a valuable return on investment in the tech.

Alas, the UK has not given up on its pursuit of greater surveillance powers. Prime Minister Keir Starmer, a former human rights lawyer, is a big fan of facial recognition, having said last year that it was the answer to preventing riots like the ones that broke out across the UK following the Southport murders.

Source: UK passport database images used in facial recognition scans • The Register

EU Chat Control Plan Gains Support Again: Threatens Encryption, Brings Mass Surveillance and Age Verification

A controversial European Union proposal dubbed “Chat Control” is regaining momentum, with 19 out of 27 EU member states reportedly backing the measure.

The plan would mandate that messaging platforms, including WhatsApp, Signal and Telegram, scan every message, photo and video sent by users starting in October, even if end-to-end encryption is in place, popular French tech blogger Korben wrote on Monday.

Denmark reintroduced the proposal on July 1, the first day of its EU Council presidency. France, once opposed, is now in favor, Korben said, citing Patrick Breyer, a former member of the European Parliament for Germany and the European Pirate Party.

Belgium, Hungary, Sweden, Italy and Spain are also in favor, while Germany remains undecided. However, if Berlin joins the majority, a qualified-majority vote in the Council could push the plan through by mid-October, Korben said.

A qualified majority in the EU Council is achieved when two conditions are met. First, at least 55 percent of member states, meaning 15 out of 27, must vote in favor. Second, those countries must represent at least 65 percent of the EU’s total population.
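For readers who want to check the arithmetic themselves, here is a minimal sketch in Python of the double-threshold test. It assumes you supply the member-state populations yourself; the figures and names are placeholders, not an official calculator.

    # EU Council qualified-majority test: at least 55% of member states
    # (15 of 27) AND at least 65% of the EU's total population in favour.
    def qualified_majority(votes_in_favour, populations):
        """votes_in_favour: set of country names voting yes.
        populations: dict mapping every member state to its population."""
        states_ok = len(votes_in_favour) >= 0.55 * len(populations)        # 15 of 27
        pop_in_favour = sum(populations[c] for c in votes_in_favour)
        population_ok = pop_in_favour >= 0.65 * sum(populations.values())  # 65% of population
        return states_ok and population_ok

    # With 19 of 27 states in favour the first bar is comfortably cleared;
    # whether the 65% population bar is met depends on which 19 they are.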

EU Chat Control bill finds support. Source: Pavol Luptak

Pre-encryption scanning on devices

Instead of weakening encryption, the plan seeks to implement client-side scanning, meaning software embedded in users’ devices that inspects content before it is encrypted. “A bit like if the Post Office came to read all your letters in your living room before you put them in the envelope,” Korben said.
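To make “client-side scanning” concrete, here is a deliberately simplified sketch of where such a check would sit in a messaging pipeline. It is not any vendor’s or the Commission’s actual design: real proposals rely on perceptual hashes or machine-learning classifiers rather than exact hash matching, and the reporting step below is purely a placeholder.

    import hashlib

    # Hypothetical blocklist of content hashes pushed to the device by an authority.
    BLOCKLIST = {hashlib.sha256(b"known illegal file").hexdigest()}

    def client_side_scan(plaintext: bytes) -> bool:
        """Return True if the content matches the blocklist."""
        return hashlib.sha256(plaintext).hexdigest() in BLOCKLIST

    def send_message(plaintext: bytes, encrypt, transport):
        # The scan runs BEFORE encryption: end-to-end encryption stays
        # technically intact, yet every message is inspected on the device first.
        if client_side_scan(plaintext):
            print("match -> would be reported")  # placeholder for a reporting hook
        transport(encrypt(plaintext))

    # Toy usage: the "encryption" and "transport" lambdas are stand-ins.
    send_message(b"hello", encrypt=lambda m: m[::-1], transport=lambda c: None)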

He added that the real target isn’t criminals, who use encrypted or decentralized channels, but ordinary users whose private conversations would now be open to algorithmic scrutiny.

The proposal cites the prevention of child sexual abuse material (CSAM) as its justification. However, it would result in “mass surveillance by means of fully automated real-time surveillance of messaging and chats and the end of privacy of digital correspondence,” Breyer wrote.

Beyond scanning, the package includes mandatory age verification, effectively removing anonymity from messaging platforms. Digital freedom groups are asking citizens to contact their MEPs, sign petitions and push back before the law becomes irreversible.

[…]

Source: EU Chat Control Plan Gains Support, Threatens Encryption

Age verification is going horribly wrong in the UK, and mass surveillance threatens freedom of thought, something we fortunately still have in the EU. This must be stopped.

Meta eavesdropped on period-tracker app’s users, SF jury rules

Meta lost a major privacy trial on Friday, with a jury in San Francisco ruling that the Menlo Park giant had eavesdropped on the users of the popular period-tracking app Flo. The plaintiffs’ lawyers who sued Meta are calling this a “landmark” victory; the tech company contends that the jury got it all wrong.

The case goes back to 2021, when eight women sued Flo and a group of other tech companies, including Google and Facebook, now known as Meta. The stakes were extremely personal. Flo asked users about their sex lives, mental health and diets, and guided them through menstruation and pregnancy. Then, the women alleged, Flo shared pieces of that data with other companies. The claims were largely based on a 2019 Wall Street Journal story and a 2021 Federal Trade Commission investigation.

Google, Flo and the analytics company Flurry, which was also part of the lawsuit, reached settlements with the plaintiffs, as is common in class action lawsuits about tech privacy. But Meta stuck it out through the entire trial and lost.

[…]

Their complaint also pointed to Facebook’s terms for its business tools, which said the company used so-called “event data” to personalize ads and content.

In a 2022 filing, the tech giant admitted that Flo used Facebook’s kit during this period and that the app sent data connected to “App Events.” But Meta denied receiving intimate information about users’ health.

Nonetheless, the jury ruled against Meta. Along with the eavesdropping decision, the jurors determined that Flo’s users had a reasonable expectation they weren’t being overheard or recorded, and that Meta didn’t have consent to eavesdrop or record. The unanimous verdict was that the massive company violated the California Invasion of Privacy Act.

The jury’s ruling could have far-reaching effects. Per a June filing about the case’s class action status, more than 3.7 million people in the United States registered for Flo between November 2016 and February 2019. Those potential claimants are expected to be updated via email and on a case website; it’s not yet clear what the remittance from the trial or settlements might be.

[…]

Source: Meta eavesdropped on period-tracker app’s users, SF jury rules

Didn’t Take Long To Reveal The UK’s Online Safety Act Is Exactly The Privacy-Crushing Failure Everyone Warned About

[…] the real kicker is what content is now being gatekept behind invasive age verification systems. Users in the UK now need to submit a selfie or government ID to access:

[…]

Yes, you read that right. A law supposedly designed to protect children now requires victims of sexual assault to submit government IDs to access support communities. People struggling with addiction must undergo facial recognition scans to find help quitting drinking or smoking. The UK government has somehow concluded that access to basic health information and peer support networks poses such a grave threat to minors that it justifies creating a comprehensive surveillance infrastructure around it.

[…]

And this is all after a bunch of other smaller websites and forums shut down earlier this year when other parts of the law went into effect.

This is exactly what happens when you regulate the internet as if it’s all just Facebook and Google. The tech giants can absorb the compliance costs, but everyone else gets crushed.

The only websites with the financial capacity to work around the government’s new regulations are the ones causing the problems in the first place. And now Meta, which already has a monopoly on a number of near-essential online activities (from local sales to university group chats), is reaping the benefits.

[…]

The age verification process itself is a privacy nightmare wrapped in security theater. Users are being asked to upload selfies that get run through facial recognition algorithms, or hand over copies of their government-issued IDs to third-party companies. The facial recognition systems are so poorly implemented that people are easily fooling them with screenshots from video games—literally using images from the video game Death Stranding. This isn’t just embarrassing; it reveals the fundamental security flaw at the heart of the entire system. If these verification methods can’t distinguish between a real person and a video game character, what confidence should we have in their ability to protect the sensitive biometric data they’re collecting?

But here’s the thing: even when these systems “work,” they’re creating massive honeypots of personal data. As we’ve seen repeatedly, companies collecting biometric data and ID verification inevitably get breached, and suddenly intimate details about people’s online activity become public. Just ask the users of Tea, a women’s dating safety app that recently exposed thousands of users’ verification selfies after requiring facial recognition for “safety.”

The UK government’s response to widespread VPN usage has been predictably authoritarian. First, they insisted nothing would change:

“The Government has no plans to repeal the Online Safety Act, and is working closely with Ofcom to implement the Act as quickly and effectively as possible to enable UK users to benefit from its protections.”

But then, Tech Secretary Peter Kyle deployed the classic authoritarian playbook: dismissing all criticism as support for child predators. This isn’t just intellectually dishonest—it’s a deliberate attempt to shut down legitimate policy debate by smearing critics as complicit in child abuse. It’s particularly galling given that the law Kyle is defending will do absolutely nothing to stop actual predators, who will simply migrate to unregulated platforms or use the same VPNs that law-abiding citizens are now flocking to.

[…]

Meanwhile, the actual harms it purports to address? Those remain entirely unaddressed. Predators will simply move to unregulated platforms, encrypted messaging, or services that don’t comply. Or they’ll just use VPNs. The law creates the illusion of safety while actually making everyone less secure.

This is what happens when politicians decide to regulate technology they don’t understand, targeting problems they can’t define, with solutions that don’t work. The UK has managed to create a law so poorly designed that it simultaneously violates privacy, restricts freedom, harms small businesses, and completely fails at its stated goal of protecting children.

And all of this was predictable. Hell, it was predicted. Civil society groups, activists, and legal experts all warned of these results and were dismissed by the likes of Peter Kyle as supporting child predators.

[…]

A petition set up on the UK government’s website demanding a repeal of the entire OSA received many hundreds of thousands of signatures within days. The government has already brushed it off with more nonsense, promising that the enforcer of the law, Ofcom, “will take a sensible approach to enforcement with smaller services that present low risk to UK users, only taking action where it is proportionate and appropriate, and will focus on cases where the risk and impact of harm is highest.”

But that’s a bunch of vague nonsense that doesn’t take into account that no platform wants to be on the receiving end of such an investigation, and that platforms will therefore take these overly aggressive steps to avoid scrutiny.

[…]

What makes this particularly tragic is that there were genuine alternatives. Real child safety measures—better funding for mental health support, improved education programs, stronger privacy protections that don’t require mass surveillance—were all on the table. Instead, the UK chose the path that maximizes government control while minimizing actual safety.

The rest of the world should take note.

Source: Didn’t Take Long To Reveal The UK’s Online Safety Act Is Exactly The Privacy-Crushing Failure Everyone Warned About

Public ChatGPT Queries Are Getting Indexed By Google and Other Search Engines (update: fixed!)

An anonymous reader quotes a report from TechCrunch: It’s a strange glimpse into the human mind: If you filter search results on Google, Bing, and other search engines to only include URLs from the domain “https://chatgpt.com/share,” you can find strangers’ conversations with ChatGPT. Sometimes, these shared conversation links are pretty dull — people ask for help renovating their bathroom, understanding astrophysics, and finding recipe ideas. In another case, one user asks ChatGPT to rewrite their resume for a particular job application (judging by this person’s LinkedIn, which was easy to find based on the details in the chat log, they did not get the job). Someone else is asking questions that sound like they came out of an incel forum. Another person asks the snarky, hostile AI assistant if they can microwave a metal fork (for the record: no), but they continue to ask the AI increasingly absurd and trollish questions, eventually leading it to create a guide called “How to Use a Microwave Without Summoning Satan: A Beginner’s Guide.”
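The “filter” TechCrunch describes is just an ordinary search-operator query, along the lines of the example below; the standard site: operator restricts results to a given domain or path, and what it actually returns depends on what each engine has indexed (and, per the update above, the indexing experiment has since been ended):

    site:chatgpt.com/share bathroom renovation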

ChatGPT does not make these conversations public by default. A conversation would be appended with a “/share” URL only if the user deliberately clicks the “share” button on their own chat and then clicks a second “create link” button. The service also declares that “your name, custom instructions, and any messages you add after sharing stay private.” After clicking through to create a link, users can toggle whether or not they want that link to be discoverable. However, users may not anticipate that other search engines will index their shared ChatGPT links, potentially betraying personal information (my apologies to the person whose LinkedIn I discovered).

According to OpenAI, these chats were indexed as part of an experiment. “ChatGPT chats are not public unless you choose to share them,” an OpenAI spokesperson told TechCrunch. “We’ve been testing ways to make it easier to share helpful conversations, while keeping users in control, and we recently ended an experiment to have chats appear in search engine results if you explicitly opted in when sharing.”

A Google spokesperson also weighed in, telling TechCrunch that the company has no control over what gets indexed. “Neither Google nor any other search engine controls what pages are made public on the web. Publishers of these pages have full control over whether they are indexed by search engines.”

Source: Public ChatGPT Queries Are Getting Indexed By Google and Other Search Engines

UK’s most tattooed man blocked from accessing porn online by new rules

Britain’s most tattooed man has a lot more time on his hands and not a lot else thanks to new porn laws.

The King of Ink says facial recognition tech has made it harder to chat to webcam girls, after sites started mistaking his tattooed face for a mask.

The new rules came into force last week, introducing stricter checks under Ofcom’s children’s codes.

The King of Ink, as he’s legally known, said: ‘Some of the websites are asking for picture verification, like selfies, and it’s not recognising my face.

‘It’s saying “remove your mask” because the technology is made so you can’t hold up a picture to the camera or wear a mask.

‘Would this also be the case for someone who is disfigured? They should have thought of this from day one.’

The businessman and entrepreneur, from Stechford, Birmingham, feels discriminated against on the basis of his permanent identity.

The tattoo enthusiast says his heavily tattooed face is a permanent part of his identity (Picture: @kingofinklandkingbodyart)

‘It’s as important as the name really, and I changed my name legally,’ he said.

‘Without a name you haven’t got an identity, and it’s the same with a face.

[…]

Source: UK’s most tattooed man blocked from accessing porn online by new rules | News UK | Metro News

So many ways to circumvent it, so many ways it breaks. And really, age verification’s only winners are the tech companies that people are forced to pay money to.

Google AI is watching — how to turn off Gemini on Android

[…]

Why you shouldn’t trust Gemini with your data

Gemini promises to simplify how you interact with your Android — fetching emails, summarizing meetings, pulling up files. But behind that helpful facade is an unprecedented level of centralized data collection, powered by a company known for privacy washing and for misleading users about how their data is used, and one that was hit with $2.9 billion in fines in 2024 alone, mostly for privacy violations and antitrust breaches.

Other people may see your sensitive information

Even more concerning, human reviewers may process your conversations. While Google claims these chats are disconnected from your Google account before review, that doesn’t mean much when a simple prompt like “Show me the email I sent yesterday” might return personal data like your name and phone number.

Your data may be shared beyond Google

Gemini may also share your data with third-party services. When Gemini interacts with other services, your data gets passed along and processed under their privacy policies, not just Google’s. Right now, Gemini mostly connects with Google services, but integrations with apps like WhatsApp and Spotify are already showing up. Once your data leaves Google, you cannot control where it goes or how long it’s kept.

The July 2025 update keeps Gemini connected without your consent

Before July, turning off Gemini Apps Activity automatically disabled all connected apps, so you couldn’t use Gemini to interact with other services unless you allowed data collection for AI training and human review. But Google’s July 7 update changed this behavior and now keeps Gemini connected to certain services — such as Phone, Messages, WhatsApp, and Utilities — even if activity tracking is off.

While this might sound like a privacy-conscious change — letting you use Gemini without contributing to AI training — it still raises serious concerns. Google has effectively preserved full functionality and ongoing access to your data, even after you’ve opted out.

Can you fully disable Gemini on Android?

No, and that’s by design.

[…]

How to turn off Gemini AI on Android

  1. Open the Gemini app on your Android.
  2. Tap your profile icon in the top-right corner.
  3. Go to Gemini Apps Activity*.
  4. Tap Turn off, then Turn off and delete activity, and follow the prompts.
  5. Select your profile icon again and go to Apps**.
  6. Tap the toggle switch to prevent Gemini from interacting with Google apps and third-party services.

*Gemini Apps Activity is a setting that controls whether your interactions with Gemini are saved to your Google account and used to improve Google’s AI systems. When it’s on, your conversations may be reviewed by humans, stored for up to 3 years, and used for AI training. When it’s off, your data isn’t used for AI training, but it’s still stored for up to 72 hours so Google can process your requests and feedback.

**Apps are the Google apps and third-party services that Gemini can access to perform tasks on your behalf — like reading your Gmail, checking your Google Calendar schedule, retrieving documents from Google Drive, playing music via Spotify, or sending messages on your behalf via WhatsApp. When Gemini is connected to these apps, it can access your personal content to fulfill prompts, and that data may be processed by Google or shared with the third-party app according to their own privacy policies.

Source: Google AI is watching — how to turn off Gemini on Android | Proton

Google Is Rolling Out Its AI Age Verification to More Services, and I’m Skeptical

Yesterday, I wrote about how YouTube is now using AI to guess your age. The idea is this: Rather than rely on the age attached to your account, YouTube analyzes your activity on its platform, and makes a determination based on how your activity compares with that of other users. If the AI thinks you’re an adult, you can continue on; if it thinks your behavior aligns with that of a teenage user, it’ll put restrictions and protections on your account.

Now, Google is expanding its AI age verification tools beyond just its video streaming platform, to other Google products as well. As with YouTube, Google is trialing this initial rollout with a small pool of users, and based on its results, will expand the test to more users down the line. But over the next few weeks, your Google Account may be subject to this new AI, whose only goal is to estimate how old you are.

That AI is trained to look for patterns of behavior across Google products associated with users under the age of 18. That includes the categories of information you might be searching for, or the types of videos you watch on YouTube. Google’s a little cagey on the details, but suffice it to say that the AI is likely snooping through most, if not all, of what you use Google and its products for.

Restrictions and protections on teen Google accounts

We do know some of the restrictions and protections Google plans to implement when it detects a user is under 18 years old. As I reported yesterday, that involves turning on YouTube’s Digital Wellbeing tools, such as reminders to stop watching videos, and, if it’s late, encouragements to go to bed. YouTube will also limit repetitive views of certain types of content.

In addition to these changes to YouTube, you’ll also find you can no longer access Timeline in Maps. Timeline saves your Google Maps history, so you can effectively travel back through time and see where you’ve been. It’s a cool feature, but Google restricts access to users 18 years of age or older. So, if the AI detects you’re underage, no Timeline for you.

[…]

Source: Google Is Rolling Out Its AI Age Verification to More Services, and I’m Skeptical

Of course there is no mention of how to ask for recourse if the AI gets it wrong.