UK Schools Normalizing Biometric Collection By Using Facial Recognition For Meal Payments

Subjecting students to surveillance tech is nothing new. Most schools have had cameras installed for years. Moving students from desks to laptops allows schools to monitor internet use, even when students aren’t on campus. Bringing police officers into schools to handle disciplinary problems allows law enforcement agencies to utilize the same tech and analytics they deploy against the public at large. And if cameras are already in place, it’s often trivial to add facial recognition features.

The same tech that can keep kids from patronizing certain retailers is also being used to keep deadbeat kids from scoring free lunches. While some local governments in the United States are trying to limit the expansion of surveillance tech in their own jurisdictions, governments in the United Kingdom seem less concerned about the mission creep of surveillance technology.

Some students in the UK are now able to pay for their lunch in the school canteen using only their faces. Nine schools in North Ayrshire, Scotland, started taking payments using biometric information gleaned from facial recognition systems on Monday, according to the Financial Times. [alt link]

The technology is being provided by CRB Cunningham, which has installed a system that scans the faces of students and cross-checks them against encrypted faceprint templates stored locally on servers in the schools. It’s being brought in to replace fingerprint scanning and card payments, which have been deemed less safe since the advent of the COVID-19 pandemic.
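
For context, this kind of identification generally works by turning the live camera frame into a numeric “faceprint” (an embedding) and comparing it against each enrolled template. Here is a minimal sketch of the idea, assuming a generic embedding model – the threshold, names, and matching logic are hypothetical, not CRB Cunningham’s actual system:

```python
# Hypothetical sketch of 1:N faceprint matching, not CRB Cunningham's code.
# Assumes some model has already turned faces into fixed-length embeddings.
import numpy as np

SIMILARITY_THRESHOLD = 0.75  # deployment-tuned cut-off; value invented here

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, enrolled_templates):
    """Return the student ID whose stored template best matches the live
    scan, or None if nothing clears the threshold (no payment taken)."""
    best_id, best_score = None, SIMILARITY_THRESHOLD
    for student_id, template in enrolled_templates.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = student_id, score
    return best_id
```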

According to the Financial Times report, 65 schools have already signed up to participate in this program, which has supposedly dropped transaction times at the lunchroom register to less than five seconds per student. I assume that’s an improvement, but it seems fingerprints/cards weren’t all that slow and there are plenty of options for touchless payment if schools need somewhere to spend their cafeteria tech money.

CRB says more than 97% of parents have consented to the collection and use of their children’s biometric info to… um… move kids through the lunch line faster. I guess the sooner you get kids used to having their faces scanned to do mundane things, the less likely they’ll be to complain when demands for info cross over into more private spaces.

The FAQ on the program makes it clear it’s a single-purpose collection governed by a number of laws and data collection policies. Parents can opt out at any time and all data is deleted after opt out or if the student leaves the school. It’s good this is being handled responsibly but, like all facial recognition tech, mistakes can (and will) be made. When these inevitably occur, hopefully the damage will be limited to a missed meal.

The FAQ handles questions specifically about this program. The other flyer published by the North Ayrshire Council explains nothing and implies facial recognition is harmless, accurate, and a positive addition to students’ lives.

We’re introducing Facial Recognition!

This new technology is now available for a contactless meal service!

Following this exciting announcement, the flyer moves on to discussing biometric collections and the tech that makes it all possible. It accomplishes this in seven short “land of contrasts” paragraphs that explain almost nothing and completely ignore the inherent flaws in these systems as well as the collateral damage misidentification can cause.

The section titled “The history of biometrics” contains no history. Instead, it says biometric collections are already omnipresent so why worry about paying for lunch with your face?

Whilst the use of biometric recognition has been steadily growing over the last decade or so, these past couple of years have seen an explosion in development, interest and vendor involvement, particularly in mobile devices where they are commonly used to verify the owner of the device before unlocking or making purchases.

If students want to learn more (or anything) about the history of biometrics, I guess they’ll need to do their own research. Because this is the next (and final) paragraph of the “history of biometrics” section:

We are delighted to offer this fast and secure identification technology to purchase our delicious and nutritious school meals

Time is a flattened circle, I guess. The history of biometrics is the present. And the present is the future of student payment options, of which there are several. But these schools have put their money on facial recognition, which will help them raise a generation of children who’ve never known a life where they weren’t expected to use their bodies to pay for stuff.

Source: UK Schools Normalizing Biometric Collection By Using Facial Recognition For Meal Payments | Techdirt

Nintendo Killed Emulation Sites Then Released Garbage N64 Games For The Switch

[…]

You will recall that a couple of years back, Nintendo opened up a new front in its constant IP wars by going after ROM and emulation sites. That caused plenty of sites to simply shut themselves down, but Nintendo also made a point of getting some scalps to hang on its belt, most famously in the form of RomUniverse. That site, which very clearly had infringing material not only on the site but promoted by the site’s ownership, got slapped around in the courts to the tune of a huge judgment against it, one the site owners simply cannot pay.

But all of those are details and don’t answer the real question: why did Nintendo do this? Well, as many expected from the beginning, it did this because the company was planning to release a series of classic consoles, namely the NES mini and SNES mini. But, of course, what about later consoles? Such as the Nintendo 64?

Well, the answer is that Nintendo has offered a Nintendo Switch Online service uplift that includes some N64 games you can play there instead.

After years of “N64 mini” rumors (which have yet to come to fruition), Nintendo announced plans to honor its first fully 3D gaming system late last month in the form of the Nintendo Switch Online Expansion Pack. Pay a bit extra, the company said, and you’d get a select library of N64 classics, emulated by the company that made them, on Switch consoles as part of an active NSO subscription.

One month later, however, Nintendo’s sales proposition grew more sour. That “bit extra” ballooned to $30 more per year, on top of the existing $20/year fee—a 150 percent jump in annual price. Never mind that the price also included an Animal Crossing expansion pack (which retro gaming fans may not want) and Sega Genesis games (which have been mostly released ad nauseam on every gaming system of the past decade). For many interested fans, that price jump was about the N64 collection.

So, a bit of a big price tag and a bunch of extras that are mostly beside the point from the perspective of the buyer. But, hey, at least Nintendo fans will finally get some N64 games to play on their Switch consoles, right?

Well, it turns out that Nintendo’s offering cannot come close to matching the quality of the very emulators and ROMs that Nintendo has worked so hard to disappear. The Ars Technica post linked above goes into excruciating detail, some of which we’ll discuss for the purpose of giving examples, but here are the categories in which Nintendo’s product does worse than an emulator on a PC.

  • Game options, such as visual settings for resolution to fit modern screens
  • Visuals, such as N64’s famous blur settings, and visual changes that expose outdated graphical sprites
  • Controller input lag
  • Controller configuration options
  • Multiplayer lag/stutter


If that seems like a lot of problems compared with emulators that have been around for quite a while, well, ding ding ding! We’ll get into some examples briefly below, but I’ll stipulate that none of the issues in the categories above are incredibly bad. But there are so many of them that they all add up to bad!

[….]

Source: Nintendo Killed Emulation Sites Then Released Garbage N64 Games For The Switch | Techdirt

NFI decrypts Tesla’s hidden driving data

[…] The Netherlands Forensic Institute (NFI) said it discovered a wealth of information about Tesla’s Autopilot, along with data around speed, accelerator pedal positions, steering wheel angle and more. The findings will allow the government to “request more targeted data” to help determine the cause of accidents, the investigators said.

The researchers already knew that Tesla vehicles encrypt and store accident related data, but not which data and how much. As such, they reverse-engineered the system and succeeded in “obtaining data from the models S, Y, X and 3,” which they described in a paper presented at an accident analysis conference.

[….]

With knowledge of how to decrypt the storage, the NFI carried out tests with a Tesla Model S so it could compare the logs with real-world data. It found that the vehicle logs were “very accurate,” with deviations less than 1 km/h (about 0.6 MPH).

[…]

It used to be possible to extract Autopilot data from Tesla EVs, but it’s now encrypted in recent models, the investigators said. Tesla encrypts data for good reason, they acknowledged, including protecting its own IP from other manufacturers and guarding a driver’s privacy. They also noted that the company does provide specific data to authorities and investigators if requested.

However, the team said that the extra data they extracted would allow for more detailed accident investigations, “especially into the role of driver assistance systems.” It added that it would be ideal to know if other manufacturers stored the same level of detail over long periods of time. “If we would know better which data car manufacturers all store, we can also make more targeted claims through the courts or the Public Prosecution Service,” said NFI investigator Frances Hoogendijk. “And ultimately that serves the interest of finding the truth after an accident.”

Source: The Dutch government claims it can decrypt Tesla’s hidden driving data | Engadget

Protecting your IP this way basically means the data can’t be used for legitimate purposes – such as investigating accidents – and it holds back progress. This whole IP thing has gotten way out of hand, to the detriment of the human race!

Also, this sounds like data collection that is not GDPR-compliant.

5 notable Facebook fuckups in the recent revelations

The Facebook Papers are based on leaks from former Facebook staffer Frances Haugen and other inside sources. Haugen has appeared before US Congress, British Parliament, and given prominent television interviews. Among the allegations raised are that Facebook:

  • Knows that its algorithms lead users to extreme content and that it employs too few staff or contractors to curb such content, especially in languages other than English. Content glorifying violence and hate therefore spreads on Facebook – which really should know better by now, after The New York Times reported in 2018 that Myanmar’s military used Facebook to spread racist propaganda that led to mass violence against minority groups;
  • Enforces its rules selectively, allowing certain celebrities and websites to get away with behavior that would get others kicked off the platform. Inconsistent enforcement means users don’t get the protection from harmful content Facebook has so often promised, implying that it prioritises finding eyeballs for ads ahead of user safety;
  • Planned a special version of Instagram targeting teenagers, but cancelled it after Haugen revealed the site’s effects on some users – up to three per cent of teenage girls experience depression or anxiety, or self-harm, as a result of using the service;
  • Can’t accurately assess user numbers and may be missing users with multiple accounts. The Social Network™ may therefore have misrepresented its reach to advertisers, or made its advertising look more super-targeted than it really is – or both;
  • Just isn’t very good at spotting the kind of content it says has no place on its platform – like human trafficking – yes, that means selling human beings on Facebook. At one point Apple was so upset by the prevalence of Facebook posts of this sort it threatened to banish Zuckerberg’s software from the App Store.

Outlets including AP News and The Wall Street Journal have more original reporting on the leaks.

Source: Facebook labels recent revelations unfair • The Register

Google deliberately throttled ad load times to promote AMP, locking advertisers into its own advertising marketplace

More detail has emerged from a 173-page complaint filed last week in the lawsuit brought against Google by a number of US states, including allegations that Google deliberately throttled advertisements not served to its AMP (Accelerated Mobile Pages) pages.

The lawsuit – as we explained at the end of last week – was originally filed in December 2020 and concerns alleged anti-competitive practice in digital advertising. The latest document, filed on Friday, makes fresh claims alleging ad-throttling around AMP.

Google introduced AMP in 2015, with the stated purpose of accelerating mobile web pages. An AMP page is a second version of a web page using AMP components and restricted JavaScript, and is usually served via Google’s content delivery network. Until 2018, the AMP project, although open source, had as part of its governance a BDFL (Benevolent Dictator for Life), this being Google’s Malte Ubl, the technical lead for AMP.

In 2018, Ubl posted that this changed “from a single Tech lead to a Technical Steering Committee”. The TSC sets its own membership and has a stated goal of “no more than 1/3 of the TSC from one employer”, though currently has nine members, of whom four are from Google, including operating director Joey Rozier.

According to the Friday court filing, representing the second amended complaint [PDF] from the plaintiffs, “Google ad server employees met with AMP employees to strategize about using AMP to impede header bidding.” Header bidding, as described in our earlier coverage, enabled publishers to offer ad space to multiple ad exchanges, rather than exclusively to Google’s ad exchange. The suit alleges that AMP limited the compatibility with header bidding to just “a few exchanges,” and “routed rival exchange bids through Google’s ad server so that Google could continue to peek at their bids and trade on inside information”.
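
For readers unfamiliar with the mechanics, header bidding amounts to a simultaneous auction: the publisher solicits bids from every participating exchange at once and takes the highest price, instead of handing the impression to one ad server with a privileged last look. A toy sketch of the auction logic (real header bidding runs as JavaScript in the page, e.g. via wrappers like Prebid.js; exchange names and numbers here are invented):

```python
# Toy model of a header-bidding auction: all exchanges bid in parallel,
# the highest CPM wins. Exchange functions and bids are invented.
from concurrent.futures import ThreadPoolExecutor

def run_header_auction(impression, exchanges):
    """exchanges: callables that each return a (bid_cpm, creative) tuple."""
    with ThreadPoolExecutor() as pool:
        bids = list(pool.map(lambda exchange: exchange(impression), exchanges))
    return max(bids, key=lambda bid: bid[0])  # best price wins, no last look

exchanges = [lambda imp: (2.10, "exchange-A creative"),
             lambda imp: (2.45, "exchange-B creative")]
print(run_header_auction({"slot": "top-banner"}, exchanges))  # exchange B wins
```

The complaint’s allegation, in these terms, is that AMP narrowed the set of exchanges allowed into that auction and routed rival bids through Google’s own server anyway.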

The lawsuit also states that Google’s claims of faster performance for AMP pages “were not true for publishers that designed their web pages for speed”.

A more serious claim is that: “Google throttles the load time of non-AMP ads by giving them artificial one-second delays in order to give Google AMP a ‘nice comparative boost’. Throttling non-AMP ads slows down header bidding, which Google then uses to denigrate header bidding for being too slow.”

The document goes on to allege that: “Internally, Google employees grappled with ‘how to [publicly] justify [Google] making something slower’.”

Google promoted AMP in part by ranking non-AMP pages below AMP pages in search results, and featuring a “Search AMP Carousel” specifically for AMP content. This presented what the complaint claims was a “Faustian bargain,” where “(1) publishers who used header bidding would see the traffic to their site drop precipitously from Google suppressing their ranking in search and re-directing traffic to AMP-compatible publishers; or (2) publishers could adopt AMP pages to maintain traffic flow but forgo exchange competition in header bidding, which would make them more money on an impression-by-impression basis.”

The complaint further alleges that “According to Google’s internal documents, [publishers made] 40 per cent less revenue on AMP pages.”

A brief history of AMP

AMP was controversial from its inception. In 2017 developer Jeremy Keith described AMP as deceptive, drawing defensive remarks from Ubl. Keith later joined the AMP advisory committee, but resigned in August, saying that “I can’t in good faith continue to advise on the AMP project for the OpenJS Foundation when it has become clear to me that AMP remains a Google product, with only a subset of pieces that could even be considered open source.”

One complaint is that the AMP specification requires a link to Google-hosted JavaScript.

In May 2020 Google stated it would “remove the AMP requirement from Top Stories eligibility”.

This was confirmed in April 2021, when Google posted about an update to its “page experience” whereby “the Top Stories carousel feature on Google Search will be updated to include all news content, as long as it meets the Google News policies. This means that using the AMP format is no longer required.” In addition, “we will no longer show the AMP badge icon to indicate AMP content.” Finally, Google Search signed exchanges, which pre-fetches content to speed page rendering on sites which support the feature, was extended to all web pages where it was previously restricted to AMP pages.

This is evidence that Google is pulling back from its promotion of AMP, though it also said that “Google continues to support AMP”.

As for the complaint, it alleges that Google has an inherent conflict of interest. According to the filing: “Google was able to demand that it represent the buy-side (i.e., advertisers), where it extracted one fee, as well as the sell-side (i.e., publishers), where it extracted a second fee, and it was also able to force transactions to clear in its exchange, where it extracted a third, even larger, fee.”

The company also has more influence than any other on web standards, thanks to the dominant Chrome browser and Chromium browser engine, and on mobile technology, thanks to Android.

That Google would devise a standard from which it benefited is not surprising, but the allegation of deliberately delaying ads on other formats in order to promote it is disturbing and we have asked the company to comment.

Source: Google deliberately throttled ad load times to promote AMP, claims new court document • The Register

Monopolies eh!

UK government hands secret services cloud contract to AWS

The UK’s intelligence services are to store their secret files in the AWS cloud in a deal inked earlier this year, according to reports.

The GCHQ organisation (electrical/radio communications eavesdropping), MI5 (domestic UK intelligence matters), MI6 (external UK intel) and also the Ministry of Defence (MoD) will access their data in the cloud, albeit in UK-located AWS data centres.

The news was first reported in the Financial Times newspaper (paywall), which said GCHQ drove the deal that was signed earlier this year, and the data will be stored in a high-security way. It is claimed by unknown sources that AWS itself will not have access to the data.

Apparently the three agencies plus the MoD will be able to access information faster and share it more quickly when needed. This is presumably in contrast to each agency storing its own information on its own on-premises computer systems.

[…]

The US’s CIA signed a $600m AWS Cloud contract in 2013. That contract was upgraded in 2020 and involved AWS, Google, IBM, Microsoft and Oracle in a consortium.

Of course, for the US, AWS is a domestic firm. The French government is setting up its own sovereign public cloud called Bleu for sensitive government data. This “Cloud de Confiance” will be based on Microsoft’s Azure platform – and will include Microsoft 365 – but will apparently be “delivered via an independent environment” that has “immunity from all extraterritorial legislation and economic independence” from within an “isolated infrastructure that uses data centres located in France.”

In GCHQ’s reported view, no UK-based public cloud could provide the scale or capabilities needed for the security services’ data storage requirements.

[….]

Source: UK government hands secret services cloud contract to AWS • The Register

Giant, free index to world’s research papers released online

In a project that could unlock the world’s research papers for easier computerized analysis, an American technologist has released online a gigantic index of the words and short phrases contained in more than 100 million journal articles — including many paywalled papers.

The catalogue, which was released on 7 October and is free to use, holds tables of more than 355 billion words and sentence fragments listed next to the articles in which they appear. It is an effort to help scientists use software to glean insights from published work even if they have no legal access to the underlying papers, says its creator, Carl Malamud. He released the files under the auspices of Public Resource, a non-profit corporation in Sebastopol, California, that he founded.

[….]

Computer scientists already text mine papers to build databases of genes, drugs and chemicals found in the literature, and to explore papers’ content faster than a human could read. But they often note that publishers ultimately control the speed and scope of their work, and that scientists are restricted to mining only open-access papers, or those articles they (or their institutions) have subscriptions to. Some publishers have said that researchers looking to mine the text of paywalled papers need their authorization.

And although free search engines such as Google Scholar have — with publishers’ agreement — indexed the text of paywalled literature, they only allow users to search with certain types of text queries, and restrict automated searching. That doesn’t allow large-scale computerized analysis using more specialized searches, Malamud says.
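
Structurally, what Malamud released is an inverted index: each word or short phrase maps to the list of articles containing it, which is what makes large-scale computerized queries possible without the full texts. A toy version of the idea (article IDs and texts invented; the real catalogue spans 107 million articles, and this sketch only shows the shape of the data):

```python
# Toy inverted index mapping words and two-word phrases to the articles
# that contain them. Article IDs and texts are invented for illustration.
from collections import defaultdict

def build_index(articles):
    index = defaultdict(set)
    for article_id, text in articles.items():
        tokens = text.lower().split()
        for i, token in enumerate(tokens):
            index[token].add(article_id)                            # single word
            if i + 1 < len(tokens):
                index[token + " " + tokens[i + 1]].add(article_id)  # short phrase
    return index

index = build_index({"doi:10.1/aaa": "gene expression in mice",
                     "doi:10.1/bbb": "gene editing with crispr"})
print(sorted(index["gene"]))          # both articles
print(sorted(index["gene editing"]))  # only the second
```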

[…]

Michael Carroll, a legal researcher at the American University Washington College of Law in Washington DC, says that distributing the index should be legal worldwide because the files do not copy enough of an underlying article to infringe the publisher’s copyright — although laws vary by country. “Copyright does not protect facts and ideas, and these results would be treated as communication of facts derived from the analysis of the copyrighted articles,” he says.

The only legal question, Carroll adds, is whether Malamud’s obtaining and copying of the underlying papers was done without breaching publishers’ terms. Malamud says that he did have to get copies of the 107 million articles referenced in the index to create it; he declined to say how.


Source: Giant, free index to world’s research papers released online

It is sad indeed that so much research – much of it paid for by taxpayers, and all of it eventually subsidised by customers of the companies that funded it – is impossible or very hard for scientists to look up, because of copyright. This is a clear impediment to the growth of wealth and knowledge, and it is not hard to see why countries like China, which don’t let people sit on their copyrighted arses but make them innovate for a living, are growing much faster than the legally quagmired West.

Location Data Firm Got GPS Data From Apps Even When People Opted Out

Huq, an established data vendor that obtains granular location information from ordinary apps installed on people’s phones and then sells that data, has been receiving GPS coordinates even when people explicitly opted-out of such collection inside individual Android apps, researchers and Motherboard have found.

The news highlights a stark problem for smartphone users: that they can’t actually be sure if some apps are respecting their explicit preferences around data sharing. The data transfer also presents an issue for the location data companies themselves. Many claim to be collecting data with consent, and by extension, in line with privacy regulations. But Huq was seemingly not aware of the issue when contacted by Motherboard for comment, showing that location data firms harvesting and selling this data may not even know whether they are actually getting it with consent or not.

“This shows an urgent need for regulatory action,” Joel Reardon, assistant professor at the University of Calgary and the forensics lead and co-founder of AppCensus, a company that analyzes apps, and who first flagged some of the issues around Huq to Motherboard, said in an email. “I feel that there’s plenty wrong with the idea that—as long as you say it in your privacy policy—then it’s fine to do things like track millions of people’s every moment and sell it to private companies to do what they want with it. But how do we even start fixing problems like this when it’s going to happen regardless of whether you agree, regardless of any consent whatsoever.”

[…]

Huq does not publicly say which apps it has relationships with. Earlier this year Motherboard started to investigate Huq by compiling a list of apps that contained code related to the company. Some of the apps have been downloaded millions or tens of millions of times, including “SPEEDCHECK,” an internet speed testing app; “Simple weather & clock widget,” a basic weather app; and “Qibla Compass,” a Muslim prayer app.

Independently, Reardon and AppCensus also examined Huq and later shared some of their findings with Motherboard. Reardon said in an email that he downloaded one app called “Network Signal Info” and found that it still sent location and other data to Huq after he opted-out of the app sharing data with third parties.

[…]

Source: Location Data Firm Got GPS Data From Apps Even When People Opted Out

What Else Do the Leaked ‘Facebook Papers’ Show? Angry-face emojis have 5x the weight of a thumbs-up like… and more

The documents leaked to U.S. regulators by a Facebook whistleblower “reveal that the social media giant has privately and meticulously tracked real-world harms exacerbated by its platforms,” reports the Washington Post.

Yet it also reports that at the same time Facebook “ignored warnings from its employees about the risks of their design decisions and exposed vulnerable communities around the world to a cocktail of dangerous content.”

And in addition, the whistleblower also argued that due to Mark Zuckerberg’s “unique degree of control” over Facebook, he’s ultimately personally responsible for what the Post describes as “a litany of societal harms caused by the company’s relentless pursuit of growth.” Zuckerberg testified last year before Congress that the company removes 94 percent of the hate speech it finds before a human reports it. But in internal documents, researchers estimated that the company was removing less than 5 percent of all hate speech on Facebook…

For all Facebook’s troubles in North America, its problems with hate speech and misinformation are dramatically worse in the developing world. Documents show that Facebook has meticulously studied its approach abroad, and is well aware that weaker moderation in non-English-speaking countries leaves the platform vulnerable to abuse by bad actors and authoritarian regimes. According to one 2020 summary, the vast majority of its efforts against misinformation — 84 percent — went toward the United States, the documents show, with just 16 percent going to the “Rest of World,” including India, France and Italy…

Facebook chooses maximum engagement over user safety. Zuckerberg has said the company does not design its products to persuade people to spend more time on them. But dozens of documents suggest the opposite. The company exhaustively studies potential policy changes for their effects on user engagement and other factors key to corporate profits.

Amid this push for user attention, Facebook abandoned or delayed initiatives to reduce misinformation and radicalization… Starting in 2017, Facebook’s algorithm gave emoji reactions like “angry” five times the weight as “likes,” boosting these posts in its users’ feeds. The theory was simple: Posts that prompted lots of reaction emoji tended to keep users more engaged, and keeping users engaged was the key to Facebook’s business. The company’s data scientists eventually confirmed that “angry” reaction, along with “wow” and “haha,” occurred more frequently on “toxic” content and misinformation. Last year, when Facebook finally set the weight on the angry reaction to zero, users began to get less misinformation, less “disturbing” content and less “graphic violence,” company data scientists found.
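
To make the weighting concrete, here is a toy version of reaction-weighted ranking using the five-to-one ratio described in the reporting – obviously not Facebook’s actual code, just the arithmetic that made outrage outrank approval:

```python
# Reaction weights as described in the reporting: emoji reactions counted
# five times a plain like until "angry" was later zeroed out.
REACTION_WEIGHTS = {"like": 1, "love": 5, "haha": 5, "wow": 5, "sad": 5, "angry": 5}

def engagement_score(reactions):
    """reactions: counts per reaction type for one post."""
    return sum(REACTION_WEIGHTS.get(kind, 0) * count
               for kind, count in reactions.items())

# A post that angers 100 people outranks one that 300 people merely like:
print(engagement_score({"angry": 100}))  # 500
print(engagement_score({"like": 300}))   # 300
```
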
The Post also contacted a Facebook spokeswoman for a response. The spokeswoman denied that Zuckerberg “makes decisions that cause harm” and dismissed the findings as being “based on selected documents that are mischaracterized and devoid of any context…”

Responding to the spread of specific pieces of misinformation on Facebook, the spokeswoman went as far as to claim that at Facebook, “We have no commercial or moral incentive to do anything other than give the maximum number of people as much of a positive experience as possible.”

She added that the company is “constantly making difficult decisions.”

Source: What Else Do the Leaked ‘Facebook Papers’ Show? – Slashdot

‘A Mistake by YouTube Shows Its Power Over Media’ – and Kafkaesque arbitration rules

“Every hour, YouTube deletes nearly 2,000 channels,” reports the New York Times. “The deletions are meant to keep out spam, misinformation, financial scams, nudity, hate speech and other material that it says violates its policies.

“But the rules are opaque and sometimes arbitrarily enforced,” they write — and sometimes, YouTube does end up making mistakes. (Alternate URL here…) The gatekeeper role leads to criticism from multiple directions. Many on the right of the political spectrum in the United States and Europe claim that YouTube unfairly blocks them. Some civil society groups say YouTube should do more to stop the spread of illicit content and misinformation… Roughly 500 hours of video are uploaded to YouTube every minute globally in different languages. “It’s impossible to get our minds around what it means to try and govern that kind of volume of content,” said Evelyn Douek, senior research fellow at the Knight First Amendment Institute at Columbia University. “YouTube is a juggernaut, by some metrics as big or bigger than Facebook.”

In its email on Tuesday morning, YouTube said Novara Media [a left-leaning London news group] was guilty of “repeated violations” of YouTube’s community guidelines, without elaborating. Novara’s staff was left guessing what had caused the problem. YouTube typically has a three-strikes policy before deleting a channel. It had penalized Novara only once before… Novara’s last show released before the deletion was about sewage policy, which hardly seemed worthy of YouTube’s attention. One of the organization’s few previous interactions with YouTube was when the video service sent Novara a silver plaque for reaching 100,000 subscribers…

Staff members worried it had been a coordinated campaign by critics of their coverage to file complaints with YouTube, triggering its software to block their channel, a tactic sometimes used by right-wing groups to go after opponents…. An editor, Gary McQuiggin, filled out YouTube’s online appeal form. He then tried using YouTube’s online chat bot, speaking with a woman named “Rose,” who said, “I know this is important,” before the conversation crashed. Angry and frustrated, Novara posted a statement on Twitter and other social media services about the deletion. “We call on YouTube to immediately reinstate our account,” it said. The post drew attention in the British press and from members of Parliament.

Within a few hours, Novara’s channel had been restored. Later, YouTube said Novara had been mistakenly flagged as spam, without providing further detail.
“We work quickly to review all flagged content,” YouTube said in a statement, “but with millions of hours of video uploaded on YouTube every day, on occasion we make the wrong call.”

But Ed Procter, chief executive of the Independent Monitor for the Press, told the Times that it was at least the fifth time that a news outlet had material deleted by YouTube, Facebook or Twitter without warning.

Source: ‘A Mistake by YouTube Shows Its Power Over Media’ – Slashdot

So if you have friends in Parliament you can get YouTube to have a look at unbanning you, but if you only have a few hundred thousand followers you are fucked.

It’s a bit like Amazon, except more people depend on the Amazon marketplace for a living:

At Amazon, Some Brands Get More Protection From Fakes Than Others

Dirty dealing in the $175 billion Amazon Marketplace

Amazon’s Alexa Collects More of Your Data Than Any Other Smart Assistant

Our smart devices are listening. Whether it’s personally identifiable information, location data, voice recordings, or shopping habits, our smart assistants know far more than we realize.

[…]

All five services collect your name, phone number, device location, and IP address; the names and numbers of your contacts; your interaction history; and the apps you use. If you don’t like that information being stored, you probably shouldn’t use a voice assistant.

[…]


Keep in mind that no voice assistant provider is truly interested in protecting your privacy. For instance, Google Assistant and Cortana maintain a log of your location history and routes, Alexa and Bixby record your purchase history, and Siri tracks who is in your Apple Family.

[…]

If you’re looking to take control of your smart assistant, you can stop Alexa from sending your recordings to Amazon, turn off Google Assistant and Bixby, and manage Siri‘s data collection habits.

Source: Amazon’s Alexa Collects More of Your Data Than Any Other Smart Assistant

Internet Service Providers Collect, Sell Horrifying Amount of Sensitive Data, Government Study Concludes

The new FTC report studied the privacy practices of six unnamed broadband ISPs and their advertising arms, and found that the companies routinely collect an ocean of consumer location, browsing, and behavioral data. They then share this data with dodgy middlemen via elaborate business arrangements that often aren’t adequately disclosed to broadband consumers.

“Even though several of the ISPs promise not to sell consumers personal data, they allow it to be used, transferred, and monetized by others and hide disclosures about such practices in fine print of their privacy policies,” the FTC report said.

The FTC also found that while many ISPs provide consumers tools allowing them to opt out of granular data collection, those tools are cumbersome to use—when they work at all. 

[…]

The agency’s report also found that while ISPs promise to only keep consumer data for as long as needed for “business purposes,” the definition of what constitutes a “business purpose” is extremely broad and varies among broadband providers and wireless carriers.

The report repeatedly cites Motherboard reporting showing how wireless companies have historically sold sensitive consumer location data to dubious third parties, often without user consent. This data has subsequently been abused by everyone from bounty hunters and stalkers to law enforcement and those posing as law enforcement.

The FTC was quick to note that because ISPs have access to the entirety of the data that flows across the internet and your home network, they often have access to even more data than what’s typically collected by large technology companies, ad networks, and app makers.

That includes the behavior of internet of things devices connected to your network, your daily movements, your online browsing history, clickstream data (not only which sites you visit but how much time you linger there), email and search data, race and ethnicity data, DNS records, your cable TV viewing habits, and more.

In some instances ISPs have even developed tracking systems that embed each packet a user sends over the internet with an individual identifier, allowing monitoring of user behavior in granular detail. Wireless carrier Verizon was fined $1.3 million in 2016 for implementing such a system without informing consumers or letting them opt out.

“Unlike traditional ad networks whose tracking consumers can block through browser or mobile device settings, consumers cannot use these tools to stop tracking by these ISPs, which use ‘supercookie’ technology to persistently track users,” the FTC report said.
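
Conceptually, the supercookie trick looks like this: the carrier’s network equipment stamps a persistent per-subscriber identifier onto unencrypted HTTP requests in transit (X-UIDH was Verizon’s real header name). Because the tag is added between your device and the destination, no browser setting ever sees it. A sketch:

```python
# Illustration of ISP header injection. The X-UIDH header name is the one
# Verizon actually used; the subscriber ID and request are invented.
def inject_tracking_header(request_headers, subscriber_id):
    tagged = dict(request_headers)       # copy the outgoing request
    tagged["X-UIDH"] = subscriber_id     # persistent; the user can't clear it
    return tagged

request = {"Host": "example.com", "User-Agent": "Mozilla/5.0"}
print(inject_tracking_header(request, "subscriber-8f3a2c"))
```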

[…]

Source: Internet Service Providers Collect, Sell Horrifying Amount of Sensitive Data, Government Study Concludes

Client-Side Scanning Is An Insecure Nightmare Just Waiting To Be Exploited By Governments or companies. Apple basically installing spyware under a nice name.

In August, Apple declared that combating the spread of CSAM (child sexual abuse material) was more important than protecting millions of users who’ve never used their devices to store or share illegal material. While encryption would still protect users’ data and communications (in transit and at rest), Apple had given itself permission to inspect data residing on people’s devices before allowing it to be sent to others.

This is not a backdoor in a traditional sense. But it can be exploited just like an encryption backdoor if government agencies want access to devices’ contents or mandate companies like Apple do more to halt the spread of other content governments have declared troublesome or illegal.

Apple may have implemented its client-side scanning carefully after weighing the pros and cons of introducing a security flaw, but there’s simply no way to engage in this sort of scanning without creating a very large and slippery slope capable of accommodating plenty of unwanted (and unwarranted) government intercession.

Apple has put this program on hold for the time being, citing concerns raised by pretty much everyone who knows anything about client-side scanning and encryption. The conclusions that prompted Apple to step away from the precipice of this slope (at least momentarily) have been compiled in a report [PDF] on the negative side effects of client-side scanning, written by a large group of cybersecurity and encryption experts.

[…]

Only policy decisions prevent the scanning expanding from illegal abuse images to other material of interest to governments; and only the lack of a software update prevents the scanning expanding from static images to content stored in other formats, such as voice, text, or video.

And if people don’t think governments will demand more than Apple’s proactive CSAM efforts, they haven’t been paying attention. CSAM is only the beginning of the list of content governments would like to see tech companies target and control.

While the Five Eyes governments and Apple have been talking about child sex-abuse material (CSAM) —specifically images— in their push for CSS, the European Union has included terrorism and organized crime along with sex abuse. In the EU’s view, targeted content extends from still images through videos to text, as text can be used for both sexual solicitation and terrorist recruitment. We cannot talk merely of “illegal” content, because proposed UK laws would require the blocking online of speech that is legal but that some actors find upsetting.

Once capabilities are built, reasons will be found to make use of them. Once there are mechanisms to perform on-device censorship at scale, court orders may require blocking of nonconsensual intimate imagery, also known as revenge porn. Then copyright owners may bring suit to block allegedly infringing material.

That’s just the policy and law side. And that’s only a very brief overview of clearly foreseeable expansions of CSS to cover other content, which also brings with it concerns about it being used as a tool for government censorship. Apple has already made concessions to notoriously censorial governments like China’s in order to continue to sell products and services there.

[…]

CSS is at odds with the least-privilege principle. Even if it runs in middleware, its scope depends on multiple parties in the targeting chain, so it cannot be claimed to use least-privilege in terms of the scanning scope. If the CSS system is a component used by many apps, then this also violates the least-privilege principle in terms of scope. If it runs at the OS level, things are worse still, as it can completely compromise any user’s device, accessing all their data, performing live intercept, and even turning the device into a room bug.

CSS has difficulty meeting the open-design principle, particularly when the CSS is for CSAM, which has secrecy requirements for the targeted content. As a result, it is not possible to publicly establish what the system actually does, or to be sure that fixes done in response to attacks are comprehensive. Even a meaningful audit must trust that the targeted content is what it purports to be, and so cannot completely test the system and all its failure modes.

Finally, CSS breaks the psychological-acceptability principle by introducing a spy in the owner’s private digital space. A tool that they thought was theirs alone, an intimate device to guard and curate their private life, is suddenly doing surveillance on behalf of the police. At the very least, this takes the chilling effect of surveillance and brings it directly to the owner’s fingertips and very thoughts.

[…]

Despite this comprehensive report warning against the implementation of client-side scanning, there’s a chance Apple may still roll its version out. And once it does, the pressure will be on other companies to do at least as much as Apple is doing to combat CSAM.

Source: Report: Client-Side Scanning Is An Insecure Nightmare Just Waiting To Be Exploited By Governments | Techdirt

Client-side scanning is like installing listening software on a device. Once anyone has access to install whatever they like, there is nothing stopping them from listening in on everything. Despite the technical-sounding name, CSS basically means the manufacturer installing spyware on your device.

Facial recognition scheme in place in some British schools – more to come

Facial recognition technology is being employed in more UK schools to allow pupils to pay for their meals, according to reports today.

In North Ayrshire Council, a Scottish authority encompassing the Isle of Arran, nine schools are set to begin processing meal payments for school lunches using facial scanning technology.

The authority and the company implementing the technology, CRB Cunninghams, claim the system will help reduce queues and is less likely to spread COVID-19 than card payments and fingerprint scanners, according to the Financial Times.

Speaking to the publication, David Swanston, the MD of supplier CRB Cunninghams, said the cameras verify the child’s identity against “encrypted faceprint templates”, and will be held on servers on-site at the 65 schools that have so far signed up.

[…]

North Ayrshire council said 97 per cent of parents had given their consent for the new system, although some said they were unsure whether their children had been given enough information to make their decision.

Seemingly unaware of the controversy surrounding facial recognition, education solutions provider CRB Cunninghams announced its introduction of the technology in schools in June as the “next step in cashless catering.”

[…]

Privacy campaigners voiced concerns that moving the technology into schools merely for payment was needlessly normalising facial recognition.

“No child should have to go through border style identity checks just to get a school meal,” Silkie Carlo of the campaign group Big Brother Watch told The Reg.

“We are supposed to live in a democracy, not a security state. This is highly sensitive, personal data that children should be taught to protect, not to give away on a whim. This biometrics company has refused to disclose who else children’s personal information could be shared with and there are some red flags here for us.

“Facial recognition technology typically suffers from inaccuracy, particularly for females and people of colour, and we’re extremely concerned about how this invasive and discriminatory system will impact children.”

[…]

Those concerned about the security of school systems now storing children’s biometric data will not be reassured by the fact that educational establishments have become targets for cyber-attacks.

In March, the Harris Federation, a not-for-profit charity responsible for running 50 primary and secondary academies in London and Essex, became the latest UK education body to fall victim to ransomware. The institution said it was “at least” the fourth multi-academy trust targeted just that month alone. Meanwhile, South and City College Birmingham earlier this year told 13,000 students that all lectures would be delivered via the web because a ransomware attack had disabled its core IT systems.

[…]

Source: Facial recognition scheme in place in some British schools • The Register

The students probably gave their consent because if they didn’t, they wouldn’t get any lunch. The problem with biometrics is that they don’t change. So if someone steals yours, then it’s stolen forever. It’s not a password you can reset.

Why does Dutch supermarket Albert Heijn have cameras looking at you at the self checkout?

The Party for the Animals (PvdD) wants clarity from outgoing Minister for Legal Protection Dekker about a camera on Albert Heijn’s self-scanner. It concerns the PS20 from manufacturer Zebra. According to this company, the camera on the self-scanner supports facial recognition to automatically identify customers. PvdD MPs Van Raan and Wassenberg want to know whether facial recognition is used in Albert Heijn stores in any way. The minister must also explain what legal basis Albert Heijn or other supermarket chains could rely on if they decided to use facial recognition. Finally, the PvdD MPs want to know what Minister Dekker can do to prevent supermarkets from using facial recognition now or in the future.

Source: PvdD wil opheldering over camera op zelfscanner van Albert Heijn – Emerce

Canon Sued for Disabling All-in-One Printer When Ink Runs Out

A customer fed up with the tyranny of home printers is suing Canon for disabling multiple functions on an all-in-one printer when it runs out of ink.

Consumer printer makers have long used the razor blade business model—so named after companies that sell razor handles cheaply but compatible replacement blades at much higher prices.

[…]

The advent of devices like smartphones and even social media has made sharing photos digitally much easier, which means consumers are printing photos less and less. That has had an effect on the profitability of home printers.

[…]

Leacraft, who is named as the plaintiff in a class-action complaint against Canon filed in a U.S. federal court in New York last week, found that their Canon Pixma MG6320 all-in-one printer would no longer scan or fax documents when it was out of ink, despite neither of those functions requiring any printing at all. According to Bleeping Computer, it’s an issue that dates back to at least 2016 when other customers reported the same problem to Canon through the company’s online forums, and were told by the company’s support people that all the ink cartridges must be installed and contain ink to use all of the printer’s features.

[…]

The complaint points out that Canon promotes its all-in-one printers as having multiple distinct features, including printing, copying, scanning, and sometimes even faxing, but without any warnings that those features are dependent on sufficient levels of ink being available.

[…]

Source: Canon Sued for Disabling All-in-One Printer When Ink Runs Out

At Amazon, Some Brands Get More Protection From Fakes Than Others

There are two classes of merchant on Amazon.com: those who get special protection from counterfeiters and those who don’t. From a report: The first category includes sellers of some big-name brands, such as Adidas, Apple and even Amazon itself. They benefit from digital fortifications that prevent unauthorized sellers from listing certain products — an iPhone, say, or eero router — for sale. Many lesser-known brands belong to the second group and have no such shield. Fred Ruckel, inventor of a popular cat toy called the Ripple Rug, is one of those sellers. A few months ago, knockoff artists began selling versions of his product, siphoning off tens of thousands of dollars in sales and forcing him to spend weeks trying to have the interlopers booted off the site.

Amazon’s marketplace has long been plagued with fakes, a scourge that has made household names like Nike leery of putting their products there. While most items can be uploaded freely to the site, Amazon by 2016 had begun requiring would-be sellers of a select group of products to get permission to list them. The company doesn’t publicize the program, but in the merchant community it has become known as “brand gating.” Of the millions of products sold on Amazon, perhaps thousands are afforded this kind of protection, people who advise sellers say. Most merchants, many of them small businesses, rely on Amazon’s algorithms to ferret out fakes before they appear — an automated process that dedicated scammers have managed to evade.

Source: At Amazon, Some Brands Get More Protection From Fakes Than Others – Slashdot

Moscow metro launches facial recognition payment system despite privacy concerns

More than 240 metro stations across Moscow now allow passengers to pay for a ride by looking at a camera. The Moscow metro has launched what authorities say is the first mass-scale deployment of a facial recognition payment system. According to The Guardian, passengers can access the payment option called FacePay by linking their photo, bank card and metro card to the system via the Mosmetro app. “Now all passengers will be able to pay for travel without taking out their phone, Troika or bank card,” Moscow mayor Sergey Sobyanin tweeted.

In the official Moscow website’s announcement, the city’s Department of Transport said all Face Pay information will be encrypted. The cameras at the designated turnstiles will read a passenger’s biometric key only, and authorities said information collected for the system will be stored in data centers that can only be accessed by interior ministry staff. Moscow’s Department of Information Technology has also assured users that photographs submitted to the system won’t be handed over to the cops.

Still, privacy advocates are concerned over the growing use of facial recognition in the city. Back in 2017, officials added facial recognition tech to the city’s 170,000 security cameras as part of its efforts to ID criminals on the street. Activists filed a case against Moscow’s Department of Technology a few years later in hopes of convincing the courts to ban the use of the technology. However, a court in Moscow sided with the city, deciding that its use of facial recognition does not violate the privacy of citizens. Reuters reported earlier this year, though, that those cameras were also used to identify protesters who attended rallies.

Stanislav Shakirov, the founder of Roskomsvoboda, a group that aims to protect Russians’ digital rights, said in a statement:

“We are moving closer to authoritarian countries like China that have mastered facial technology. The Moscow metro is a government institution and all the data can end up in the hands of the security services.”

Meanwhile, the European Parliament called on lawmakers in the EU earlier this month to ban automated facial recognition in public spaces. It cited evidence that facial recognition AI can still misidentify PoCs, members of the LGBTI+ community, seniors and women at higher rates. In the US, local governments are banning the use of the technology in public spaces, including statewide bans by Massachusetts and Maine. Four Democratic lawmakers also proposed a bill to ban the federal government from using facial recognition.

Source: Moscow metro launches facial recognition payment system despite privacy concerns | Engadget

Of course one of the huge problems with biometrics is that you can’t change them. Once you are compromised, you can’t go and change the password.

Tesla’s Bringing Car Insurance to Texas W/ New ‘Safety Score’ by eating and selling your location data

After two years of offering car insurance to drivers across California, Tesla’s officially bringing a similar offering to clientele in its new home state of Texas. As Electrek first reported, the big difference between the two is how drivers’ premiums are calculated: in California, the prices were largely determined by statistical evaluations. In Texas, your insurance costs will be calculated in real-time, based on your driving behavior.

Tesla says it grades this behavior using the “Safety Score” feature—the in-house metric designed by the company in order to estimate a driver’s chance of future collision. These scores were recently rolled out in order to screen drivers that were interested in testing out Tesla’s “Full Self Driving” software, which, like the Safety Score itself, is currently in beta. And while the self-driving software release date is, um, kind of up in the air for now, Tesla drivers in the Lone Star State can use their safety score to apply for quotes on Tesla’s website as of today.

As Tesla points out in its own documents, relying on a single score makes the company a bit of an outlier in the car insurance market. Most traditional insurers round up a driver’s costs based on a number of factors that are wholly unrelated to their actual driving: depending on the state, this can include age, gender, occupation, and credit score, all playing a part in defining how much a person’s insurance might cost.

Tesla, on the other hand, relies on a single score, which the company says is tallied up from five different factors: the number of forward-collision warnings you get every 1,000 miles, the number of times you “hard brake,” how often you take too-fast turns, how closely you drive behind other drivers, and how often you take your hands off the wheel when Autopilot is engaged.
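
As a rough illustration, a composite score like this can be sketched as a weighted penalty over those five factors. The factor names below follow the article; the weights, scaling, and formula are invented for illustration and are certainly not Tesla’s actual calculation:

```python
# Hypothetical composite "safety score": weighted penalties over the five
# factors named in the article, each normalised to 0 (clean) .. 1 (worst).
FACTOR_WEIGHTS = {
    "forward_collision_warnings": 0.3,
    "hard_braking": 0.2,
    "aggressive_turning": 0.2,
    "unsafe_following": 0.2,
    "forced_autopilot_disengagement": 0.1,
}

def safety_score(normalised_rates):
    penalty = sum(weight * normalised_rates.get(factor, 0.0)
                  for factor, weight in FACTOR_WEIGHTS.items())
    return round(100 * (1 - penalty), 1)

print(safety_score({}))                     # 100.0 for a spotless record
print(safety_score({"hard_braking": 0.5}))  # 90.0 for a moderate brake habit
```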

[…]

Source: Tesla’s Bringing Car Insurance to Texas W/ New ‘Safety Score’

The idea sounds reasonable – but giving Tesla my location data and allowing them to process and sell that doesn’t.

Researchers show Facebook’s ad tools can target a single specific user

A new research paper written by a team of academics and computer scientists from Spain and Austria has demonstrated that it’s possible to use Facebook’s targeting tools to deliver an ad exclusively to a single individual if you know enough about the interests Facebook’s platform assigns them.

The paper — entitled “Unique on Facebook: Formulation and Evidence of (Nano)targeting Individual Users with non-PII Data” — describes a “data-driven model” that defines a metric showing the probability a Facebook user can be uniquely identified based on interests attached to them by the ad platform.

The researchers demonstrate that they were able to use Facebook’s Ads manager tool to target a number of ads in such a way that each ad only reached a single, intended Facebook user.
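
The arithmetic behind that result is straightforward: every interest you AND together shrinks the matching audience multiplicatively, so a handful of rare interests quickly leaves an expected audience of less than one person. A back-of-the-envelope sketch (prevalence numbers invented, not taken from the paper):

```python
# Expected audience size if interests were independent: each interest's
# prevalence multiplies the pool down. Numbers below are made up.
def expected_audience(total_users, interest_prevalences):
    audience = float(total_users)
    for prevalence in interest_prevalences:
        audience *= prevalence
    return audience

# ~2.9 billion users, four interests each shared by 1 in 1,000 people:
print(expected_audience(2_900_000_000, [0.001] * 4))  # ~0.003 => likely unique
```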

[…]

Source: Researchers show Facebook’s ad tools can target a single user | TechCrunch

Amazon copied products and rigged search results, documents show

Amazon.com Inc has been repeatedly accused of knocking off products it sells on its website and of exploiting its vast trove of internal data to promote its own merchandise at the expense of other sellers. The company has denied the accusations.

But thousands of pages of internal Amazon documents examined by Reuters – including emails, strategy papers and business plans – show the company ran a systematic campaign of creating knockoffs and manipulating search results to boost its own product lines in India, one of the company’s largest growth markets.

The documents reveal how Amazon’s private-brands team in India secretly exploited internal data from Amazon.in to copy products sold by other companies, and then offered them on its platform. The employees also stoked sales of Amazon private-brand products by rigging Amazon’s search results so that the company’s products would appear, as one 2016 strategy report for India put it, “in the first 2 or three … search results” when customers were shopping on Amazon.in.

Among the victims of the strategy: a popular shirt brand in India, John Miller, which is owned by a company whose chief executive is Kishore Biyani, known as the country’s “retail king.” Amazon decided to “follow the measurements of” John Miller shirts down to the neck circumference and sleeve length, the document states.

[…]

Source: Amazon copied products and rigged search results, documents show

Software Removes The Facebook From Facebook’s VR Headset (Mostly)

It’s not a jailbreak, but [basti564]’s Oculess software nevertheless makes it possible to remove telemetry and account dependencies from Facebook’s Oculus Quest VR headsets. It is not normally possible to use these devices without a valid Facebook account (or a legacy Oculus account in the case of the original Quest), so the ability to flip any kind of disconnect switch without bricking the hardware is a step forward, even if there are a few caveats to the process.

To be clear, the Quest devices still require normal activation and setup via a Facebook account. But once that initial activation is complete, Oculess lets one disable telemetry or disconnect the headset from its Facebook account entirely.

[…]

Source: Software Removes The Facebook From Facebook’s VR Headset (Mostly) | Hackaday
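
The write-up doesn’t spell out the mechanics, but tools of this kind typically work over ADB, since the Quest is an Android device with a developer mode. A rough sketch of that general technique, assuming developer mode is enabled and adb is installed; the package names are hypothetical placeholders, not a vetted list of what Oculess actually disables.

```python
# Rough sketch of the general technique: disable suspected telemetry
# packages on a Quest over ADB. Requires developer mode and adb on PATH.
# The package names are hypothetical placeholders, NOT a list of what
# Oculess actually touches.

import subprocess

SUSPECTED_TELEMETRY_PACKAGES = [
    "com.example.telemetry",   # placeholder
    "com.example.analytics",   # placeholder
]

def adb(*args: str) -> str:
    result = subprocess.run(["adb", *args], capture_output=True,
                            text=True, check=True)
    return result.stdout

for pkg in SUSPECTED_TELEMETRY_PACKAGES:
    # 'pm disable-user' turns a package off for the current user without
    # uninstalling it, so the change is reversible with 'pm enable'.
    print(adb("shell", "pm", "disable-user", "--user", "0", pkg))
```

Disabling per-user rather than uninstalling is what keeps this on the safe side of bricking: the system image is untouched, and a factory reset restores everything.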

Study reveals Android phones constantly snoop on their users

A new study by a team of university researchers in the UK has unveiled a host of privacy issues that arise from using Android smartphones.

The researchers have focused on Samsung, Xiaomi, Realme, and Huawei Android devices, and LineageOS and /e/OS, two forks of Android that aim to offer long-term support and a de-Googled experience.

The conclusion of the study is worrying for the vast majority of Android users.

With the notable exception of /e/OS, even when minimally configured and the handset is idle these vendor-customized Android variants transmit substantial amounts of information to the OS developer and also to third parties (Google, Microsoft, LinkedIn, Facebook, etc.) that have pre-installed system apps. – Researchers.

As the summary table indicates, sensitive user data like persistent identifiers, app usage details, and telemetry information is not only shared with the device vendors but also goes to third parties such as Microsoft, LinkedIn, and Facebook.

[Table: Summary of collected data. Source: Trinity College Dublin]

And to make matters worse, Google appears on the receiving end of the collected data almost everywhere in the table.

No way to “turn it off”

It is important to note that this concerns the collection of data for which there is no opt-out, so Android users are powerless against this type of telemetry.

This is particularly concerning when smartphone vendors include third-party apps that silently collect data even when the device owner never uses them, and which cannot be uninstalled.

For some of the built-in system apps like miui.analytics (Xiaomi), Heytap (Realme), and Hicloud (Huawei), the researchers found that the encrypted data can sometimes be decoded, putting the data at risk from man-in-the-middle (MitM) attacks.

[Chart: Volume of data (KB/h) transmitted by each vendor. Source: Trinity College Dublin]

As the study points out, even if the user resets the advertising identifier for their Google account on Android, the data-collection system can trivially re-link the new ID back to the same device and append it to the original tracking history.

The deanonymisation of users takes place using various methods, such as looking at the SIM, IMEI, location data history, IP address, network SSID, or a combination of these.

[Diagram: Potential cross-linking data collection points. Source: Trinity College Dublin]
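
The re-linking the researchers describe needs no cleverness at all: if telemetry is keyed by identifiers that survive an advertising-ID reset, reconnecting the new ID to the old history is a dictionary lookup. A toy sketch, with all identifiers invented:

```python
# Toy illustration of ad-ID re-linking: resetting the advertising ID
# achieves nothing if telemetry is also keyed by identifiers that never
# change (IMEI, SIM serial, Wi-Fi SSID history, etc.). All values
# below are invented.

history: dict[tuple[str, str], list[str]] = {}  # hardware key -> ad IDs seen

def hardware_key(report: dict) -> tuple[str, str]:
    """Stable key built from identifiers an ad-ID reset does not touch."""
    return (report["imei"], report["sim_serial"])

def ingest(report: dict) -> list[str]:
    """Record a telemetry report; returns every ad ID ever seen for
    this physical device, old and new alike."""
    ids = history.setdefault(hardware_key(report), [])
    if report["ad_id"] not in ids:
        ids.append(report["ad_id"])
    return ids

ingest({"imei": "490154203237518", "sim_serial": "89440001", "ad_id": "aaaa-1111"})
# ...user resets their advertising ID...
print(ingest({"imei": "490154203237518", "sim_serial": "89440001", "ad_id": "bbbb-2222"}))
# -> ['aaaa-1111', 'bbbb-2222']  (the reset changed nothing)
```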

Privacy-conscious Android forks like /e/OS are getting more traction as increasing numbers of users realize that they have no means to disable the unwanted functionality in vanilla Android and seek more privacy on their devices.

However, the majority of Android users remain locked into a never-ending stream of data collection, which is where regulators and consumer protection organizations need to step in and put an end to it.

Gael Duval, the creator of /e/OS, told BleepingComputer:

Today, more people understand that the advertising model that is fueling the mobile OS business is based on the industrial capture of personal data at a scale that has never been seen in history, at the world level. This has negative impacts on many aspects of our lives, and can even threaten democracy as seen in recent cases. I think regulation is needed more than ever regarding personal data protection. It has started with the GDPR, but it’s not enough and we need to switch to a “privacy by default” model instead of “privacy as an option”.

Update – A Google spokesperson has provided BleepingComputer with the following comment on the findings of the study:

While we appreciate the work of the researchers, we disagree that this behavior is unexpected – this is how modern smartphones work. As explained in our Google Play Services Help Center article, this data is essential for core device services such as push notifications and software updates across a diverse ecosystem of devices and software builds. For example, Google Play services uses data on certified Android devices to support core device features. Collection of limited basic information, such as a device’s IMEI, is necessary to deliver critical updates reliably across Android devices and apps.

Source: Study reveals Android phones constantly snoop on their users

Facebook Banned Creator of Unfollow Everything App That Made Facebook Less Toxic

A developer who created a browser extension designed to help Facebook users reduce their time spent on the platform says that the company responded by banning him and threatening to take legal action.

Louis Barclay says he created Unfollow Everything to help people enjoy Facebook more, not less. His extension, which no longer exists, allowed users to automatically unfollow everybody on their FB account, thus eliminating the newsfeed feature, one of the more odious, addictive parts of the company’s product. The feed, which allows for an endless barrage of targeted advertising, is powered by follows, not friends, so even without it, users can still visit the profiles they want to and navigate the site like normal.

The purpose of bucking the feed, Barclay says, was to allow users to enjoy the platform in a more balanced, targeted fashion, rather than being blindly coerced into constant engagement by Facebook’s algorithms.

How did Facebook reward Barclay for trying to make its user experience less toxic? Well, first it booted him off of all of its platforms—locking him out of his Facebook and Instagram accounts. Then, it sent him a cease and desist letter, threatening legal action if he didn’t shut the browser extension down. Ultimately, Barclay said he was forced to do so, and Unfollow Everything no longer exists. He recently wrote about his experience in an op-ed for Slate, saying:

If someone built a tool that made Facebook less addictive—a tool that allowed users to benefit from Facebook’s positive features while limiting their exposure to its negative ones—how would Facebook respond?

I know the answer, because I built the tool, and Facebook squashed it.

Source: Facebook Banned Creator of App That Made Facebook Less Toxic

England’s Data Guardian warns of plans to grant police access to patient data

England’s National Data Guardian has warned that government plans to allow data sharing between NHS bodies and the police could “erode trust and confidence” in doctors and other healthcare providers.

Speaking to the Independent newspaper, Dr Nicola Byrne said she had raised concerns with the government over clauses in the Police, Crime, Sentencing and Courts Bill.

The bill, set to go through the House of Lords this month, could force NHS bodies such as commissioning groups to share data with police and other specified authorities to prevent and reduce serious violence in their local areas.

Dr Byrne said the proposed law could “erode trust and confidence, and deter people from sharing information, and even from presenting for clinical care.”

Meanwhile, the bill [PDF] did not detail what information it would cover, she said. “The case isn’t made as to why that is necessary. These things need to be debated openly and in public.”

In a blog published last week, Dr Byrne said the bill imposes a duty on clinical groups in the NHS to disclose information to police without breaching any obligation of patient confidentiality.

“Whilst tackling serious violence is important, it is essential that the risks and harms that this new duty pose to patient confidentiality, and thereby public trust, are engaged with and addressed,” she said.

[…]

Source: England’s Data Guardian warns of plans to grant police access to patient data • The Register