UK decides AI still cannot patent inventions

A recent IPO consultation found many experts doubted AI was currently able to invent without human assistance.

Current law allowed humans to patent inventions made with AI assistance, the government said, despite “misperceptions” this was not the case.

Last year, the Court of Appeal ruled against Stephen Thaler, who had said his Dabus AI system should be recognised as the inventor in two patent applications, for:

  • a food container
  • a flashing light

The judges sided, by a two-to-one majority, with the IPO, which had told him to list a real person as the inventor.

“Only a person can have rights – a machine cannot,” wrote Lady Justice Laing in her judgement.

“A patent is a statutory right and it can only be granted to a person.”

But the IPO also said it would “need to understand how our IP system should protect AI-devised inventions in the future” and committed to advancing international discussions, with a view to keeping the UK competitive.

In July 2021, in a case also brought by Mr Thaler, an Australian court decided AI systems could be recognised as inventors for patent purposes.

Days earlier, South Africa had issued a similar ruling.

However, the Australian decision was later overturned on appeal.

Many AI systems are trained on large amounts of data copied from the internet.

And, on Tuesday, the IPO also announced plans to change copyright law to allow anyone with lawful access – rather than only those conducting non-commercial research, as now – to do this, to “promote the use of AI technology, and wider ‘data mining’ techniques, for the public good”.

Rights holders will still be able to control and charge for access to their works, but will no longer be able to charge extra for the ability to mine them.

An increasing number of people are using AI tools such as DALL·E 2 to create images resembling a work of human art.

And Mr Thaler has recently sued the US Copyright Office over its refusal to recognise a software system as the “author” of an image, the Register reported.

Source: UK decides AI still cannot patent inventions – BBC News

Coinbase Is Selling Data on Crypto and ‘Geotracking’ to ICE

Coinbase Tracer, the analytics arm of the cryptocurrency exchange Coinbase, has signed a contract with U.S. Immigration and Customs Enforcement that would allow the agency access to a variety of features and data caches, including “historical geo tracking data.”

Coinbase Tracer, according to the website, is aimed at governments, crypto businesses, and financial institutions. It gives these clients the ability to trace transactions on the blockchain. It is also used to “investigate illicit activities including money laundering and terrorist financing” and “screen risky crypto transactions to ensure regulatory compliance.”

The deal was originally signed in September 2021, but the contract was only now obtained by watchdog group Tech Inquiry. The deal was worth a maximum of $1.37 million and was known at the time to be a three-year contract for Coinbase’s analytics software. The newly revealed contract sheds more light on what the deal entails.

This deal will allow ICE to track transactions made through twelve different currencies, including Ethereum, Tether, and Bitcoin. Other features include “Transaction demixing and shielded transaction analysis,” which appears to be aimed at catching users who launder funds or hide transactions. Another feature, “Multi-hop link analysis for incoming and outgoing funds,” would give ICE insight into the transfer of the currencies. The most mysterious is access to “historical geo tracking data,” and ICE gave a little insight into how this tool may be used.

[…]

Source: Coinbase Is Selling Data on Crypto and ‘Geotracking’ to ICE

Google to pay $90m to settle Play Store lawsuit

Google is to pay $90 million to settle a class-action lawsuit with US developers over alleged anti-competitive behavior regarding the Google Play Store.

Eligible for a share in the $90 million fund are US developers who earned two million dollars or less in annual revenue through Google Play between 2016 and 2021. “A vast majority of US developers who earned revenue through Google Play will be eligible to receive money from this fund,” said Google.

Law firm Hagens Berman announced the settlement this morning, having been one of the first to file a class case. The legal firm was one of four that secured a $100 million settlement from Apple in 2021 for US iOS developers.

The accusations that will be settled are depressingly familiar – attorneys had alleged that Google excluded competing app stores from its platform and that the search giant charged app developers eye-watering fees.

Google said it “and a group of US developers have reached a proposed settlement that allows both parties to move forward and avoids years of uncertain and distracting litigation.”

If the court gives the go-ahead, developers that qualify will be notified.

As well as the settlement [PDF], Google has promised changes to Android 12 to make it easier for other app stores to be used on devices and to revise its Developer Distribution Agreement to clarify that developers can use contact information obtained in-app to direct users to offers on a rival app store or the developer’s own site.

The lawsuit goes back to 2020, when Hagens Berman and Sperling & Slater filed in the US District Court for the Northern District of California. Back then, much was made of the default 30 per cent commission levied by Google on Play Store app purchases and in-app transactions. Google currently has a tiered model, implemented in 2021, under which the first $1 million in annual revenue is subject to a reduced 15 per cent rate, but it appears this has been insufficient to keep the lawyers at bay.

Source: Google to pay $90m to settle Play Store lawsuit • The Register

Open source Fundamentalists SFC quit GitHub, want you to follow – because GitHub charges for Copilot feature

The Software Freedom Conservancy (SFC), a non-profit focused on free and open source software (FOSS), said it has stopped using Microsoft’s GitHub for project hosting – and is urging other software developers to do the same.

In a blog post on Thursday, Denver Gingerich, SFC FOSS license compliance engineer, and Bradley M. Kuhn, SFC policy fellow, said GitHub has over the past decade come to play a dominant role in FOSS development by building an interface and social features around Git, the widely used open source version control software.

In so doing, they claim, the company has convinced FOSS developers to contribute to the development of a proprietary service that exploits FOSS.

“We are ending all our own uses of GitHub, and announcing a long-term plan to assist FOSS projects to migrate away from GitHub,” said Gingerich and Kuhn.

We will no longer accept new member projects that do not have a long-term plan to migrate away from GitHub

The SFC mostly uses self-hosted Git repositories, they say, but the organization did use GitHub to mirror its repos.

The SFC has added a Give Up on GitHub section to its website and is asking FOSS developers to voluntarily switch to a different code hosting service.

[…]
For the SFC, the break with GitHub was precipitated by the general availability of GitHub Copilot, an AI coding assistant tool. GitHub’s decision to release a for-profit product derived from FOSS code, the SFC said, is “too much to bear.”

Copilot, based on OpenAI’s Codex, suggests code and functions to developers as they’re working. It’s able to do so because it was trained “on natural language text and source code from publicly available sources, including code in public repositories on GitHub,” according to GitHub.

[…]

Gingerich and Kuhn see that as a problem because Microsoft and GitHub have failed to provide answers about the copyright ramifications of training its AI system on public code, about why Copilot was trained on FOSS code but not copyrighted Windows code, and whether the company can specify all the software licenses and copyright holders attached to code used in the training data set.

Kuhn has written previously about his concerns that Copilot’s training may present legal risks and others have raised similar concerns. Last week, Matthew Butterick, a designer, programmer, and attorney, published a blog post stating that he agrees with those who argue that Copilot is an engine for violating open-source licenses.

“Copilot completely severs the connection between its inputs (= code under various open-source licenses) and its outputs (= code algorithmically produced by Copilot),” he wrote. “Thus, after 20+ years, Microsoft has finally produced the very thing it falsely accused open source of being: a black hole of IP rights.”

Such claims have not been settled and likely won’t be until there’s actual litigation and judgment. Other lawyers note that GitHub’s Terms of Service give it the right to use hosted code to improve the service. And certainly legal experts at Microsoft and GitHub believe they’re off the hook for license compliance, which they pass on to those using Copilot to generate code.

[…]

Source: Open source body quits GitHub, urges you to do the same • The Register

Copyright people are the bringers of slow death by horrible boredom. How they must have been pestered as little kids.

New Firefox privacy feature strips URLs of tracking parameters

Numerous companies, including Facebook, Marketo, Olytics, and HubSpot, utilize custom URL query parameters to track clicks on links.

For example, Facebook appends a fbclid query parameter to outbound links to track clicks, with an example of one of these URLs shown below.

https://www.example.com/?fbclid=IwAR4HesRZLT-fxhhh3nZ7WKsOpaiFzsg4nH0K4WLRHw1h467GdRjaLilWbLs

With the release of Firefox 102, Mozilla has added the new ‘Query Parameter Stripping’ feature that automatically strips various query parameters used for tracking from URLs when you open them, whether that be by clicking on a link or simply pasting the URL into the address bar.

Once enabled, Mozilla Firefox will strip the following tracking parameters from URLs when you click on links or paste a URL into the address bar:

  • Olytics: oly_enc_id=, oly_anon_id=
  • Drip: __s=
  • Vero: vero_id=
  • HubSpot: _hsenc=
  • Marketo: mkt_tok=
  • Facebook: fbclid=, mc_eid=
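Conceptually, what Firefox does is parse the query string and drop any parameter on its block list before loading the page. A minimal Python sketch of the same idea (an illustration of the technique, not Firefox’s actual implementation):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# The tracking parameters listed above
TRACKING_PARAMS = {
    "oly_enc_id", "oly_anon_id",  # Olytics
    "__s",                        # Drip
    "vero_id",                    # Vero
    "_hsenc",                     # HubSpot
    "mkt_tok",                    # Marketo
    "fbclid", "mc_eid",           # Facebook
}

def strip_tracking(url: str) -> str:
    """Remove known tracking parameters from a URL's query string."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_tracking("https://www.example.com/?fbclid=IwAR4Hes&page=2"))
# → https://www.example.com/?page=2
```

Legitimate parameters (like `page=2` above) survive; only the listed tracking identifiers are dropped.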

[…]

To enable Query Parameter Stripping, go into the Firefox Settings, click on Privacy & Security, and then change ‘Enhanced Tracking Protection’ to ‘Strict.’

Mozilla Firefox’s Enhanced Tracking Protection set to Strict (Source: BleepingComputer)

However, these tracking parameters will not be stripped in Private Mode even with Strict mode enabled.

To also enable the feature in Private Mode, enter about:config in the address bar, search for strip, and set the ‘privacy.query_stripping.enabled.pbmode‘ option to true, as shown below.

Enabling the privacy.query_stripping.enabled.pbmode setting (Source: BleepingComputer)

It should be noted that setting Enhanced Tracking Protection to Strict could cause issues when using particular sites.

If you enable this feature and find that sites are not working correctly, just set it back to Standard (disables this feature) or the Custom setting, which will require some tweaking.

Source: New Firefox privacy feature strips URLs of tracking parameters

Spain, Austria not convinced location data is personal

[…]

EU privacy group NOYB (None of your business), set up by privacy warrior Max “Angry Austrian” Schrems, said on Tuesday it appealed a decision of the Spanish Data Protection Authority (AEPD) to support Virgin Telco’s refusal to provide the location data it has stored about a customer.

In Spain, according to NOYB, the government still requires telcos to record the metadata of phone calls, text messages, and cell tower connections, despite Court of Justice of the EU (CJEU) decisions that prohibit indiscriminate data retention.

A Spanish customer demanded that Virgin reveal his personal data, as allowed under the GDPR. Article 15 of the GDPR guarantees individuals the right to obtain their personal data from companies that process and store it.

[…]

Virgin, however, refused to provide the customer’s location data, arguing that only law enforcement authorities may demand that information, prompting a complaint in December 2021. And the AEPD sided with the company.

NOYB says that Virgin Telco failed to explain why Article 15 should not apply since the law contains no such limitation.

“The fundamental right to access is comprehensive and clear: users are entitled to know what data a company collects and processes about them – including location data,” argued Felix Mikolasch, a data protection attorney at NOYB, in a statement. “This is independent from the right of authorities to access such data. In this case, there is no relevant exception from the right to access.”

[…]

The group said it filed a similar appeal last November in Austria, where that country’s data protection authority similarly supported Austrian mobile provider A1’s refusal to turn over customer location data. In that case, A1’s argument was that location data should not be considered personal data because someone else could have used the subscriber phone that generated it.

[…]

Location data is potentially worth billions. According to Fortune Business Insights, the location analytics market is expected to bring in $15.76 billion in 2022 and $43.97 billion by 2029.

Outside the EU, the problem is the availability of location data, rather than lack of access. In the US, where there’s no federal data protection framework, the government is a major buyer of location data – it’s more convenient than getting a warrant.

And companies that can obtain location data, often through mobile app SDKs, appear keen to monetize it.

In 2020, the FCC fined the four largest wireless carriers in the US for failing to protect customer location data in accordance with a 2018 commitment to do so.

Source: Spain, Austria not convinced location data is personal • The Register

Chinese Officials Are Weaponizing COVID Health Tracker to Block Protests

Chinese bank depositors planning a protest about their frozen funds saw their health code mysteriously turn red and were stopped from traveling to the site of a rally, confirming fears that China’s vast COVID-tracking system could be weaponized as a powerful tool to stifle dissent.

A red health code designated the would-be protesters as suspected or confirmed COVID-19 patients, limiting their movement and access to public transportation. Their rallies in the central Henan province this week were thwarted as some were forced into quarantine and others detained by police.

A 38-year-old software engineer was among hundreds who could not access their savings at four rural banks since mid-April. She had planned to travel from her home in Jiangxi province to Zhengzhou, Henan’s capital city, to join a group petition this week to demand her money back. But her health code turned from green to red shortly after she bought a train ticket on Sunday. She said a nucleic test for COVID she took the night before came back negative and her hometown has not reported any infection recently.

[…]

Source: Chinese Officials Are Weaponizing COVID Health Tracker to Block Protests

Facebook and Anti-Abortion Clinics Are Collecting Highly Sensitive Info on Would-Be Patients

Facebook is collecting ultra-sensitive personal data about abortion seekers and enabling anti-abortion organizations to use that data as a tool to target and influence people online, in violation of its own policies and promises.

In the wake of a leaked Supreme Court opinion signaling the likely end of nationwide abortion protections, privacy experts are sounding alarms about all the ways people’s data trails could be used against them if some states criminalize abortion.

A joint investigation by Reveal from The Center for Investigative Reporting and The Markup found that the world’s largest social media platform is already collecting data about people who visit the websites of hundreds of crisis pregnancy centers, which are quasi-health clinics, mostly run by religiously aligned organizations whose mission is to persuade people to choose an option other than abortion.

[…]

Reveal and The Markup have found Facebook’s code on the websites of hundreds of anti-abortion clinics. Using Blacklight, a Markup tool that detects cookies, keyloggers and other types of user-tracking technology on websites, Reveal analyzed the sites of nearly 2,500 crisis pregnancy centers – with data provided by the University of Georgia – and found that at least 294 shared visitor information with Facebook. In many cases, the information was extremely sensitive – for example, whether a person was considering abortion or looking to get a pregnancy test or emergency contraceptives.

[…]

Source: Facebook and Anti-Abortion Clinics Are Collecting Highly Sensitive Info on Would-Be Patients – Reveal

Telegram criticizes Apple for subpar web app features on iOS, crippling app

A week after confirming plans for Telegram Premium, the messaging platform’s CEO, Pavel Durov, is again criticizing Apple’s approach to its Safari browser for stifling the efforts of web developers.

Durov would very much like his web-based messaging platform, Telegram Web, to be delivered as a web app rather than native, but is prevented from offering users a full-fat experience on Apple’s mobile devices due to limitations in the iOS Safari browser.

There’s no option for web developers on Apple’s iPhone and iPad to use anything but Safari, and features taken for granted on other platforms have yet to make it to iOS.

“We suspect that Apple may be intentionally crippling its web apps,” claimed Durov, “to force its users to download more native apps where Apple is able to charge its 30 percent commission.”

[…]

Source: Telegram criticizes Apple for subpar web app features on iOS • The Register

Julian Assange Extradition to US Approved by UK Government

Julian Assange—founder of the whistleblowing website WikiLeaks—can now be extradited from the United Kingdom to the United States, where he will face charges of espionage.

In April, a London court issued a formal extradition order for Assange, and the UK Home Secretary approved it today, meaning that Assange can be extradited to the United States. According to CNBC, Assange faces 18 charges of espionage for his involvement with WikiLeaks, the website that published hundreds of thousands of classified military documents in 2010 and 2011.

Assange has spent much of the last decade either in prison or inside the Ecuadorian Embassy in London. He is currently being held in a high-security prison in London. Assange has the right to appeal today’s decision within 14 days, and WikiLeaks indicated it would do just that in a statement posted on Twitter this morning.

“This is a dark day for press freedom and for British democracy,” WikiLeaks said. “Julian did nothing wrong. He has committed no crime and is not a criminal. He is a journalist and a publisher, and he is being punished for doing his job.”

[…]

Source: Julian Assange Extradition to US Approved by UK Government

Testing firm Cignpost can profit from sale of Covid swabs with customer DNA

A large Covid-19 testing provider is being investigated by the UK’s data privacy watchdog over its plans to sell swabs containing customers’ DNA for medical research.

Source: Testing firm can profit from sale of Covid swabs | News | The Sunday Times

Find You: an AirTag clone that Apple can’t detect as an unwanted tracker

[…]

In one reported stalking case, a fashion and fitness model discovered an AirTag in her coat pocket after receiving a tracking warning notification from her iPhone. In other cases, AirTags were placed on expensive cars or motorbikes to track them from parking spots to their owners’ homes, where they were then stolen.

On February 10, Apple addressed this by publishing a news statement titled “An update on AirTag and unwanted tracking” in which they describe the way they are currently trying to prevent AirTags and the Find My network from being misused and what they have planned for the future.

[…]

Apple needs to incorporate non-genuine AirTags into its threat model, implementing security and anti-stalking features in the Find My protocol and ecosystem rather than in the AirTag itself, since a tracker can run modified firmware or not be an AirTag at all (Apple devices currently have no way to distinguish genuine AirTags from clones via Bluetooth).

The source code used for the experiment can be found here.

Edit: I have been made aware of a research paper titled “Who Tracks the Trackers?” (from November 2021) that also discusses this idea and includes more experiments. Make sure to check it out as well if you’re interested in the topic!

[…]

Fan’s Rare Recordings Of Lost 1963 Beatles’ Performances Can’t Be Heard, Because … Copyright

There’s a story in the Daily Mail that underlines why it is important for people to make copies. It concerns the re-surfacing of rare recordings of the Beatles:

In the summer of 1963, the BBC began a radio series called Pop Go The Beatles which went out at 5pm on Tuesdays on the Light Programme.

Each show featured the Beatles performing six or seven songs, recorded in advance but as live, in other words with no or minimal post-production.

The BBC had not thought it worth keeping the original recordings, even though they consisted of rarely heard material – mostly covers of old rock ‘n’ roll numbers. Fortunately, a young fan of the Beatles, Margaret Ashworth, used her father’s modified radio connected directly to a reel-to-reel tape recorder to make recordings of the radio shows, which meant they were almost of broadcast quality.

When the recording company EMI was putting together an album of material performed by the Beatles for the BBC, it was able to draw on these high-quality recordings, some of which were much better than the other surviving copies. In this case, it was just chance that Margaret Ashworth had made the tapes. The general message is that people shouldn’t do this, because “copyright”. There are other cases where historic cultural material would have been lost had people not made copies, regardless of what copyright law might say.

Margaret Ashworth thought it would be fun to put out the old programmes she had recorded on a Web site, for free, recreating the weekly schedules she had heard back in the 1960s. So she contacted the BBC for permission, but was told it would “not approve” the upload of her recordings to the Internet. As she writes:

after all these years, with the Beatles still extremely popular, it seems mean-spirited of the BBC not to allow these little time capsules to be broadcast, either by me or by the Corporation. I cannot believe there are copyright issues that cannot be solved.

Readers of this blog probably can.

Source: Fan’s Rare Recordings Of Lost Beatles’ Performances Can’t Be Heard, Because Copyright Ruins Everything | Techdirt

Now Amazon to put creepy AI cameras in UK delivery vans

Amazon is installing AI-powered cameras in delivery vans to keep tabs on its drivers in the UK.

The technology was first deployed in the US, where malfunctions reportedly led to drivers being denied bonuses. Last year, the internet giant produced a corporate video detailing how the cameras monitor driving behavior for safety reasons. The same system is now being rolled out to vehicles in the UK.

Multiple cameras are placed under the rear-view mirror. One is directed at the person behind the wheel, one faces the road, and two are located on either side to provide a wider view. The cameras do not record continuous video; footage is analyzed by software built by Netradyne, a computer-vision startup focused on driver safety. This code uses machine-learning algorithms to figure out what’s going on in and around the vehicle. Delivery drivers can also activate the cameras to record footage if they want to, such as if someone’s trying to rob them or run them off the road. There is no microphone, for what it’s worth.

Audio alerts are triggered by some behaviors, such as if a driver fails to brake at a stop sign or is driving too fast. Other actions are silently logged, such as if the driver doesn’t wear a seat-belt or if a camera’s view is blocked. Amazon, reportedly in the US at least, records workers and calculates from their activities a score that affects their pay; drivers have previously complained of having bonuses unfairly deducted for behavior the computer system wrongly classified as reckless.

[…]

Source: Now Amazon to put ‘creepy’ AI cameras in UK delivery vans • The Register

Twitter fined $150 million after selling 2FA phone numbers and email addresses to advertisers for ad targeting

Twitter has agreed to pay a $150 million fine after federal law enforcement officials accused the social media company of illegally using peoples’ personal data over six years to help sell targeted advertisements.

In court documents made public on Wednesday, the Federal Trade Commission and the Department of Justice say Twitter violated a 2011 agreement with regulators in which the company vowed to not use information gathered for security purposes, like users’ phone numbers and email addresses, to help advertisers target people with ads.

Federal investigators say Twitter broke that promise.

“As the complaint notes, Twitter obtained data from users on the pretext of harnessing it for security purposes but then ended up also using the data to target users with ads,” said FTC Chair Lina Khan.

Twitter requires users to provide a telephone number and email address to authenticate accounts. That information also helps people reset their passwords and unlock their accounts when the company blocks logging in due to suspicious activity.

But until at least September 2019, Twitter was also using that information to boost its advertising business by allowing advertisers access to users’ phone numbers and email addresses. That ran afoul of the agreement the company had with regulators.

[…]

Source: Twitter will pay a $150 million fine over accusations it improperly sold user data : NPR

Clearview AI Ordered to Purge U.K. Face Scans, Pay GBP 7.5m Fine

The United Kingdom has had it with creepy facial recognition firm Clearview AI. Under a new enforcement rule from the U.K.’s Information Commissioner’s Office, Clearview must cease the collection and use of publicly available U.K. data and delete all data of U.K. residents from its database. The order, which also requires the company to pay a £7,552,800 ($9,507,276) fine, effectively calls on Clearview to purge U.K. residents from its massive face database, reportedly consisting of over 20 billion images scraped from publicly available social media sites.

The ICO ruling, which determined Clearview violated U.K. privacy laws, comes on the heels of a multi-year joint investigation with the Australian Information Commissioner. According to the ruling, Clearview failed to use U.K. residents’ data in a way that was fair and transparent and failed to provide a lawful reason for collecting it in the first place. Clearview also failed, the ICO notes, to put in place measures to stop U.K. residents’ data from being retained indefinitely, and allegedly did not meet the higher data protection standards outlined in the EU’s General Data Protection Regulation.

[…]

Source: Clearview AI Ordered to Purge U.K. Face Scans, Pay Fine

Your data’s auctioned off up to 987 times a day, NGO reports

The average American has their personal information shared in an online ad bidding war 747 times a day. For the average EU citizen, that number is 376 times a day. Across the US and EU, this bidding war plays out 178 trillion times a year.

That’s according to data shared by the Irish Council on Civil Liberties in a report detailing the extent of real-time bidding (RTB), the technology that drives almost all online advertising and which it said relies on sharing of personal information without user consent.

The RTB industry was worth more than $117 billion last year, the ICCL report said. As with all things in its study, those numbers only apply to the US and Europe, which means the actual value of the market is likely much higher.

Real-time bidding involves the sharing of information about internet users, and it happens whenever a user lands on a website that serves ads. Information shared with advertisers can include nearly anything that would help them better target ads, and those advertisers bid on the ad space based on the information the ad network provides.

That data can be practically anything based on the Interactive Advertising Bureau’s (IAB) audience taxonomy. The basics, of course, like age, sex, location, income and the like are included, but it doesn’t stop there. All sorts of websites fingerprint their visitors – even charities treating mental health conditions – and those fingerprints can later be used to target ads on unrelated websites.
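To make the mechanism concrete, an RTB exchange packages this user information into a bid request that is broadcast to every participating bidder. The sketch below mimics the shape of an OpenRTB-style request; the field names follow the IAB OpenRTB convention, but every value is invented for illustration:

```python
import json

# Illustrative OpenRTB-style bid request; values are fabricated examples.
bid_request = {
    "id": "auction-123",
    "site": {"page": "https://news.example.com/article"},
    "device": {
        "ua": "Mozilla/5.0 ...",            # browser fingerprint material
        "ip": "203.0.113.7",                # coarse location via IP address
        "geo": {"lat": 48.2, "lon": 16.4},  # finer location, if available
    },
    "user": {
        "id": "f3b0c4",                     # ad network's persistent user ID
        "data": [{"segment": [{"id": "IAB7-28"}]}],  # interest-category tags
    },
}

# This blob goes to every bidder in the auction -- potentially hundreds
# of companies per page load -- whether or not they win the ad slot.
print(json.dumps(bid_request, indent=2))
```

The key point the ICCL report makes is visible in the structure: the data is sent out before any bid is accepted, so losing bidders receive it too.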

Google owns the largest ad network that was included in the ICCL’s report, and it alone offers RTB data to 4,698 companies in just the US. Other large advertising networks include Xandr, owned by Microsoft since late 2021, Verizon, PubMatic and more.

Not included in ICCL’s report are Amazon or Facebook’s RTB networks, as the industry figures it used for its report don’t include their ad networks. Along with only surveying part of the world that likely means that the scope of the RTB industry is, again, much larger.

Also, it’s probably illegal

The ICCL describes RTB as “the biggest data breach ever recorded,” but even that may be giving advertisers too much credit: Calling freely-broadcast RTB data a breach implies action was taken to bypass defenses, of which there aren’t any.

So, is RTB violating any laws at all? Yes, claims Gartner Privacy Research VP Nader Henein. He told The Register that the adtech industry justifies its use of RTB under the “legitimate interest” provision of the EU’s General Data Protection Regulation (GDPR).

“Multiple regulators have rejected that assessment, so the answer would be ‘yes,’ it is a violation [of the GDPR],” Henein opined.

As far back as 2019, the UK accused Google and other adtech giants of knowingly breaking the law by using RTB, a case it continues to investigate. Earlier this year, the Belgian data protection authority ruled that RTB practices violated the GDPR and required organizations working with the IAB to delete all the data collected through the use of TC strings (Transparency and Consent strings), the coded consent signals used in the RTB process.

[…]

Source: Privacy. Ad bidders haven’t heard of it, report reveals

New EU rules would require chat apps to scan private messages for child abuse

The European Commission has proposed controversial new regulation that would require chat apps like WhatsApp and Facebook Messenger to selectively scan users’ private messages for child sexual abuse material (CSAM) and “grooming” behavior. The proposal is similar to plans mooted by Apple last year but, say critics, much more invasive.

After a draft of the regulation leaked earlier this week, privacy experts condemned it in the strongest terms. “This document is the most terrifying thing I’ve ever seen,” tweeted cryptography professor Matthew Green. “It describes the most sophisticated mass surveillance machinery ever deployed outside of China and the USSR. Not an exaggeration.”

Jan Penfrat of digital advocacy group European Digital Rights (EDRi) echoed the concern, saying, “This looks like a shameful general #surveillance law entirely unfitting for any free democracy.” (A comparison of the PDFs shows differences between the leaked draft and final proposal are cosmetic only.)

The regulation would establish a number of new obligations for “online service providers” — a broad category that includes app stores, hosting companies, and any provider of “interpersonal communications service.”

The most extreme obligations would apply to communications services like WhatsApp, Signal, and Facebook Messenger. If a company in this group receives a “detection order” from the EU, it would be required to scan selected users’ messages to look for known child sexual abuse material as well as previously unseen CSAM and any messages that may constitute “grooming” or the “solicitation of children.” The last two categories of content would require the use of machine vision tools and AI systems to analyze the context of pictures and text messages.

[…]

“The proposal creates the possibility for [the orders] to be targeted but doesn’t require it,” Ella Jakubowska, a policy advisor at EDRi, told The Verge. “It completely leaves the door open for much more generalized surveillance.”

[…]


Source: New EU rules would require chat apps to scan private messages for child abuse – The Verge

US secretly issued secret subpoena to access Guardian reporter’s phone records

The US justice department secretly issued a subpoena to gain access to details of the phone account of a Guardian reporter as part of an aggressive leak investigation into media stories about an official inquiry into the Trump administration’s child separation policy at the southern border.

Leak investigators issued the subpoena to obtain the phone number of Stephanie Kirchgaessner, the Guardian’s investigations correspondent in Washington. The move was carried out without notifying the newspaper or its reporter, as part of an attempt to ferret out the source of media articles about a review into family separation conducted by the Department of Justice’s inspector general, Michael Horowitz.

It is highly unusual for US government officials to obtain a journalist’s phone details in this way, especially when no national security or classified information is involved. The move was all the more surprising in that it came from the DoJ’s inspector general’s office – the watchdog responsible for ethical oversight and whistleblower protections.

Katharine Viner, the Guardian’s editor-in-chief, decried the action as “an egregious example of infringement on press freedom and public interest journalism by the US Department of Justice”.

[…]

Source: US secretly issued subpoena to access Guardian reporter’s phone records | US news | The Guardian

Web ad firms scrape email addresses before you press the submit button

Tracking, marketing, and analytics firms have been exfiltrating the email addresses of internet users from web forms prior to submission and without user consent, according to security researchers.

Some of these firms are said to have also inadvertently grabbed passwords from these forms.

In a research paper scheduled to appear at the Usenix ’22 security conference later this year, authors Asuman Senol (imec-COSIC, KU Leuven), Gunes Acar (Radboud University), Mathias Humbert (University of Lausanne) and Frederik Zuiderveen Borgesius (Radboud University) describe how they measured data handling in web forms on the top 100,000 websites, as ranked by research site Tranco.

The boffins created their own software to measure email and password data gathering from web forms – structured web input boxes through which site visitors can enter data and submit it to a local or remote application.

Providing information through a web form by pressing the submit button generally indicates the user has consented to provide that information for a specific purpose. But web pages, because they run JavaScript code, can be programmed to respond to events prior to a user pressing a form’s submit button.

And many companies involved in data gathering and advertising appear to believe that they’re entitled to use scripts to grab the information website visitors enter into forms before the submit button has been pressed.
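A minimal sketch of how such pre-submit capture can work (the collection endpoint and function names here are hypothetical, not any vendor’s actual code): an `input` listener copies anything that looks like an email address out of the form well before any submit click.

```javascript
// Loose check: something@something.tld, with no whitespace.
function looksLikeEmail(value) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value.trim());
}

// Attach a listener that leaks email-like values as they are typed.
// The endpoint is hypothetical; sendBeacon fires even during page unload,
// so the address leaves the page without any submit click.
function attachExfilListener(form, endpoint) {
  form.addEventListener("input", (event) => {
    const value = event.target.value || "";
    if (looksLikeEmail(value)) {
      navigator.sendBeacon(endpoint, value);
    }
  });
}
```

Because the `input` event fires on every keystroke, the value is available to the script the moment it parses as an address, which is exactly the window the researchers measured.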

[…]

“Furthermore, we find incidental password collection on 52 websites by third-party session replay scripts,” the researchers say.

Replay scripts are designed to record keystrokes, mouse movements, scrolling behavior, other forms of interaction, and webpage contents in order to send that data to marketing firms for analysis. In an adversarial context, they’d be called keyloggers or malware; but in the context of advertising, somehow it’s just session-replay scripts.
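The mechanics described above can be sketched in miniature as follows. This is an illustration of the technique, not any vendor’s real recorder; the endpoint and all function names are invented. Interactions accumulate in a buffer that is periodically flushed to an analytics server.

```javascript
// Core buffer: collect timestamped events, drain them on flush.
function makeRecorder() {
  const buffer = [];
  return {
    record(type, detail) { buffer.push({ type, detail, t: Date.now() }); },
    flush() { return buffer.splice(0); }, // drain and return everything so far
  };
}

// Wire the buffer to page events and ship batches to a (hypothetical) endpoint.
function attachReplayRecorder(endpoint, flushMs = 5000) {
  const rec = makeRecorder();
  document.addEventListener("keydown", (e) => rec.record("key", e.key));
  document.addEventListener("mousemove", (e) => rec.record("move", [e.clientX, e.clientY]));
  document.addEventListener("scroll", () => rec.record("scroll", window.scrollY));
  setInterval(() => {
    const events = rec.flush();
    if (events.length) navigator.sendBeacon(endpoint, JSON.stringify(events));
  }, flushMs);
}
```

The keystroke listener is what makes incidental password collection possible: unless a field is explicitly excluded, everything typed on the page ends up in the buffer.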

[…]

Source: Web ad firms scrape email addresses before you know it • The Register

Indian Government Now Wants VPNs To Collect And Turn Over Personal Data On Users

The government of India still claims to be a democracy, but its decade-long assault on the internet and the rights of its citizens suggests it would rather be an autocracy.

The country is already host to one of the largest biometric databases in the world, housing information collected from nearly every one of its 1.2 billion citizens. And it’s going to be expanded, adding even more biometric markers from people arrested and detained.

The government has passed laws shifting liability for third-party content to service providers, as well as requiring them to provide 24/7 assistance to the Indian government for the purpose of removing “illegal” content. Then there are mandates on compelled access — something that would require broken/backdoored encryption. (The Indian government — like others demanding encryption backdoors — refuses to acknowledge this is what it’s seeking.)

In the name of cybersecurity, the Indian government is now seeking to further undermine the privacy of its citizens.

[…]

The new directions issued by CERT-In also require virtual asset providers, exchanges, and custodian wallet providers to maintain KYC records and records of financial transactions for a period of five years. Companies providing cloud and virtual private network (VPN) services will also have to register validated names, emails, and IP addresses of subscribers.

Taking the “P” out of “VPN:” that’s the way forward for the Indian government, which has apparently decided to emulate China’s strict control of internet use. And it’s yet another way the Indian government is stripping citizens of their privacy and anonymity. The government of India wants to know everything about its constituents while remaining vague and opaque about its own actions and goals.

Source: Indian Government Now Wants VPNs To Collect And Turn Over Personal Data On Users | Techdirt

Hackers are reportedly using emergency data requests to extort women and minors

In response to fraudulent legal requests, companies like Apple, Google, Meta and Twitter have been tricked into sharing sensitive personal information about some of their customers. We knew that was happening as recently as last month when Bloomberg published a report on hackers using fake emergency data requests to carry out financial fraud. But according to a newly published report from the outlet, some malicious individuals are also using the same tactics to target women and minors with the intent of extorting them into sharing sexually explicit images and videos of themselves.

It’s unclear how many fake data requests the tech giants have fielded since they appear to come from legitimate law enforcement agencies. But what makes the requests particularly effective as an extortion tactic is that the victims have no way of protecting themselves other than by not using the services offered by those companies.

[…]

Part of what has allowed the fake requests to slip through is that they abuse how the industry typically handles emergency appeals. Among most tech companies, it’s standard practice to share a limited amount of information with law enforcement in response to “good faith” requests related to situations involving imminent danger.

Typically, the information shared in those instances includes the individual’s name, IP address, email address and physical address. That might not seem like much, but it’s usually enough for bad actors to harass, dox or SWAT their target. According to Bloomberg, there have been “multiple instances” of police showing up at the homes and schools of underage women.

[…]

Source: Hackers are reportedly using emergency data requests to extort women and minors | Engadget

Brave’s De-AMP feature bypasses harmful Google AMP pages

Brave announced a new feature for its browser on Tuesday: De-AMP, which automatically jumps past any page rendered with Google’s Accelerated Mobile Pages framework and instead takes users straight to the original website. “Where possible, De-AMP will rewrite links and URLs to prevent users from visiting AMP pages altogether,” Brave said in a blog post. “And in cases where that is not possible, Brave will watch as pages are being fetched and redirect users away from AMP pages before the page is even rendered, preventing AMP / Google code from being loaded and executed.”
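The URL-rewriting half of this can be sketched roughly as follows. This is an illustration of the idea only, not Brave’s actual implementation: Google’s AMP cache URLs embed the publisher’s own host and path, so a link can be unwrapped back to the original site.

```javascript
// Rewrite a Google AMP cache URL to the publisher's original URL.
// Returns the input unchanged if it isn't a recognized AMP cache URL.
function deAmpUrl(url) {
  const u = new URL(url);
  // google.com/amp/s/<host>/<path> → https://<host>/<path>
  if (u.hostname.endsWith("google.com") && u.pathname.startsWith("/amp/")) {
    let rest = u.pathname.slice("/amp/".length);
    const secure = rest.startsWith("s/"); // "s/" marks an HTTPS origin
    if (secure) rest = rest.slice(2);
    return (secure ? "https://" : "http://") + rest + u.search;
  }
  // <pub>.cdn.ampproject.org/c/s/<host>/<path> → https://<host>/<path>
  if (u.hostname.endsWith(".cdn.ampproject.org")) {
    const m = u.pathname.match(/^\/c\/(s\/)?(.+)$/);
    if (m) return (m[1] ? "https://" : "http://") + m[2] + u.search;
  }
  return url;
}
```

The second half of Brave’s approach, redirecting mid-fetch when rewriting isn’t possible, can’t be reduced to a URL transform, which is why the browser also watches pages as they load.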

Brave framed De-AMP as a privacy feature and didn’t mince words about its stance toward Google’s version of the web. “In practice, AMP is harmful to users and to the Web at large,” Brave’s blog post said, before explaining that AMP gives Google even more knowledge of users’ browsing habits, confuses users, and can often be slower than normal web pages. And it warned that the next version of AMP — so far just called AMP 2.0 — will be even worse.

Brave’s stance is a particularly strong one, but the tide has turned hard against AMP over the last couple of years. Google originally created the framework in order to simplify and speed up mobile websites, and AMP is now managed by a group of open-source contributors. It was controversial from the very beginning and smelled to some like Google trying to exert even more control over the web. Over time, more companies and users grew concerned about that control and chafed at the idea that Google would prioritize AMP pages in search results. Plus, the rest of the internet eventually figured out how to make good mobile sites, which made AMP — and similar projects like Facebook Instant Articles — less important.

A number of popular apps and browser extensions make it easy for users to skip over AMP pages, and in recent years, publishers (including The Verge’s parent company Vox Media) have moved away from using it altogether. AMP has even become part of the antitrust fight against Google: a lawsuit alleged that AMP helped centralize Google’s power as an ad exchange and that Google made non-AMP ads load slower.

[…]

Source: Brave’s De-AMP feature bypasses ‘harmful’ Google AMP pages – The Verge

Boris Johnson, Catalan Activists Hit With NSO Spyware: Report

Spyware manufactured by the NSO Group has been used to hack droves of high-profile European politicians and activists, The New Yorker reports. Devices associated with the British Foreign Office and the office of British Prime Minister Boris Johnson are allegedly among the targeted, as well as the phones of dozens of members of the Catalan independence movement.

The magazine’s report is partially based on a recently published analysis by Citizen Lab, a digital research unit with the University of Toronto that has been at the forefront of research into the spyware industry’s shadier side.

Citizen Lab researchers told The New Yorker that mobile devices connected to the British Foreign Office were hacked with Pegasus five times between July 2020 and June 2021. A phone connected to the office of 10 Downing Street, where British Prime Minister Boris Johnson works, was reportedly hacked using the malware on July 7, 2020. British government officials confirmed to The New Yorker that the offices appeared to have been targeted, while declining to specify NSO’s involvement.

Citizen Lab researchers also told The New Yorker that the United Arab Emirates is suspected to be behind the spyware attacks on 10 Downing Street. The UAE has been accused of being involved in a number of other high-profile hacking incidents involving Pegasus spyware.

[…]

Source: Boris Johnson, Catalan Activists Hit With NSO Spyware: Report

Cisco’s Webex phoned home audio telemetry even when muted

Boffins at two US universities have found that muting popular native video-conferencing apps fails to disable device microphones – and that these apps can access audio data while muted, with at least one actually doing so.

The research is described in a paper titled, “Are You Really Muted?: A Privacy Analysis of Mute Buttons in Video Conferencing Apps,” [PDF] by Yucheng Yang (University of Wisconsin-Madison), Jack West (Loyola University Chicago), George K. Thiruvathukal (Loyola University Chicago), Neil Klingensmith (Loyola University Chicago), and Kassem Fawaz (University of Wisconsin-Madison).

The paper is scheduled to be presented at the Privacy Enhancing Technologies Symposium in July.

[…]

Among the apps studied – Zoom (Enterprise), Slack, Microsoft Teams/Skype, Cisco Webex, Google Meet, BlueJeans, WhereBy, GoToMeeting, Jitsi Meet, and Discord – most presented only limited or theoretical privacy concerns.

The researchers found that all of these apps had the ability to capture audio when the mic is muted but most did not take advantage of this capability. One, however, was found to be taking measurements from audio signals even when the mic was supposedly off.

“We discovered that all of the apps in our study could actively query (i.e., retrieve raw audio) the microphone when the user is muted,” the paper says. “Interestingly, in both Windows and macOS, we found that Cisco Webex queries the microphone regardless of the status of the mute button.”

They found that Webex, every minute or so, sends network packets “containing audio-derived telemetry data to its servers, even when the microphone was muted.”
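“Audio-derived telemetry” in this sense means statistics computed from the raw signal rather than the audio itself: for example, an energy level that reveals whether, and roughly how loudly, someone is speaking while muted. The sketch below shows one such statistic; it is illustrative only and is not Webex’s actual computation.

```javascript
// Root-mean-square level of a buffer of PCM samples: a single number
// derived from audio that indicates signal energy without containing
// the audio itself.
function rmsLevel(samples) {
  let sumSquares = 0;
  for (const s of samples) sumSquares += s * s;
  return Math.sqrt(sumSquares / samples.length);
}
```

A stream of values like this, sent every minute or so, would be enough to tell a server when a muted participant is talking, which is why the researchers flag it as a privacy concern even though no speech content is transmitted.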

[…]

Worse still from a security standpoint, while other apps encrypted their outgoing data stream before sending it to the operating system’s socket interface, Webex did not.

“Only in Webex were we able to intercept plaintext immediately before it is passed to the Windows network socket API,” the paper says, noting that the app’s monitoring behavior is inconsistent with the Webex privacy policy.

The app’s privacy policy states Cisco Webex Meetings does not “monitor or interfere with you your [sic] meeting traffic or content.”

[…]

Source: Cisco’s Webex phoned home audio telemetry even when muted • The Register