The Linkielist

Linking ideas with the world

Brave’s De-AMP feature bypasses harmful Google AMP pages

Brave announced a new feature for its browser on Tuesday: De-AMP, which automatically jumps past any page rendered with Google’s Accelerated Mobile Pages framework and instead takes users straight to the original website. “Where possible, De-AMP will rewrite links and URLs to prevent users from visiting AMP pages altogether,” Brave said in a blog post. “And in cases where that is not possible, Brave will watch as pages are being fetched and redirect users away from AMP pages before the page is even rendered, preventing AMP / Google code from being loaded and executed.”
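The link-rewriting half of De-AMP can be illustrated with a short sketch. Brave's actual implementation is more involved (it also inspects fetched pages for their canonical link), but the two best-known AMP URL shapes — the Google Search AMP viewer and the `cdn.ampproject.org` cache — can be rewritten with simple string handling. This is a hypothetical illustration, not Brave's code:

```python
from urllib.parse import urlparse

def deamp(url: str) -> str:
    """Illustrative rewrite of common Google AMP URLs back to the origin URL.

    Brave's real De-AMP logic also inspects the fetched page for its
    canonical link; this sketch only handles two well-known URL shapes.
    """
    parsed = urlparse(url)
    # Shape 1: Google Search AMP viewer, e.g.
    #   https://www.google.com/amp/s/example.com/story
    if parsed.netloc.endswith("google.com") and parsed.path.startswith("/amp/"):
        rest = parsed.path[len("/amp/"):]
        if rest.startswith("s/"):           # "s/" marks an HTTPS origin
            return "https://" + rest[len("s/"):]
        return "http://" + rest
    # Shape 2: AMP cache subdomain, e.g.
    #   https://example-com.cdn.ampproject.org/c/s/example.com/story
    if parsed.netloc.endswith(".cdn.ampproject.org"):
        parts = parsed.path.split("/")
        # path splits into ['', 'c', 's', 'example.com', 'story']
        if len(parts) > 3 and parts[1] == "c":
            scheme = "https" if parts[2] == "s" else "http"
            origin = "/".join(parts[3:] if parts[2] == "s" else parts[2:])
            return f"{scheme}://{origin}"
    return url  # not a recognized AMP link; leave unchanged

print(deamp("https://www.google.com/amp/s/example.com/story"))
# → https://example.com/story
```

The fallback of returning the URL unchanged matters here: a rewriter that guesses wrong would break ordinary links, which is presumably why Brave also has the redirect-while-fetching path for cases the URL rewrite can't handle.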

Brave framed De-AMP as a privacy feature and didn’t mince words about its stance toward Google’s version of the web. “In practice, AMP is harmful to users and to the Web at large,” Brave’s blog post said, before explaining that AMP gives Google even more knowledge of users’ browsing habits, confuses users, and can often be slower than normal web pages. And it warned that the next version of AMP — so far just called AMP 2.0 — will be even worse.

Brave’s stance is a particularly strong one, but the tide has turned hard against AMP over the last couple of years. Google originally created the framework in order to simplify and speed up mobile websites, and AMP is now managed by a group of open-source contributors. It was controversial from the very beginning and smelled to some like Google trying to exert even more control over the web. Over time, more companies and users grew concerned about that control and chafed at the idea that Google would prioritize AMP pages in search results. Plus, the rest of the internet eventually figured out how to make good mobile sites, which made AMP — and similar projects like Facebook Instant Articles — less important.

A number of popular apps and browser extensions make it easy for users to skip over AMP pages, and in recent years, publishers (including The Verge’s parent company Vox Media) have moved away from using it altogether. AMP has even become part of the antitrust fight against Google: a lawsuit alleged that AMP helped centralize Google’s power as an ad exchange and that Google made non-AMP ads load slower.

[…]

Source: Brave’s De-AMP feature bypasses ‘harmful’ Google AMP pages – The Verge

Boris Johnson, Catalan Activists Hit With NSO Spyware: Report

Spyware manufactured by the NSO Group has been used to hack droves of high-profile European politicians and activists, The New Yorker reports. Devices associated with the British Foreign Office and the office of British Prime Minister Boris Johnson are allegedly among the targeted, as well as the phones of dozens of members of the Catalan independence movement.

The magazine’s report is partially based on a recently published analysis by Citizen Lab, a digital research unit with the University of Toronto that has been at the forefront of research into the spyware industry’s shadier side.

Citizen Lab researchers told The New Yorker that mobile devices connected to the British Foreign Office were hacked with Pegasus five times between July 2020 and June 2021. A phone connected to the office of 10 Downing Street, where British Prime Minister Boris Johnson works, was reportedly hacked using the malware on July 7, 2020. British government officials confirmed to The New Yorker that the offices appeared to have been targeted, while declining to confirm NSO’s involvement.

Citizen Lab researchers also told The New Yorker that the United Arab Emirates is suspected to be behind the spyware attacks on 10 Downing Street. The UAE has been accused of being involved in a number of other high-profile hacking incidents involving Pegasus spyware.

[…]

Source: Boris Johnson, Catalan Activists Hit With NSO Spyware: Report

Cisco’s Webex phoned home audio telemetry even when muted

Boffins at two US universities have found that muting popular native video-conferencing apps fails to disable device microphones – these apps retain the ability to access audio data while muted, and at least one actually does so.

The research is described in a paper titled, “Are You Really Muted?: A Privacy Analysis of Mute Buttons in Video Conferencing Apps,” [PDF] by Yucheng Yang (University of Wisconsin-Madison), Jack West (Loyola University Chicago), George K. Thiruvathukal (Loyola University Chicago), Neil Klingensmith (Loyola University Chicago), and Kassem Fawaz (University of Wisconsin-Madison).

The paper is scheduled to be presented at the Privacy Enhancing Technologies Symposium in July.

[…]

Among the apps studied – Zoom (Enterprise), Slack, Microsoft Teams/Skype, Cisco Webex, Google Meet, BlueJeans, Whereby, GoToMeeting, Jitsi Meet, and Discord – most presented only limited or theoretical privacy concerns.

The researchers found that all of these apps had the ability to capture audio while the mic was muted, but most did not take advantage of this capability. One, however, was found to be taking measurements from audio signals even when the mic was supposedly off.

“We discovered that all of the apps in our study could actively query (i.e., retrieve raw audio) the microphone when the user is muted,” the paper says. “Interestingly, in both Windows and macOS, we found that Cisco Webex queries the microphone regardless of the status of the mute button.”

They found that Webex, every minute or so, sends network packets “containing audio-derived telemetry data to its servers, even when the microphone was muted.”
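The paper doesn't publish Webex's internals, but the behavior pattern it describes — a mute flag that gates audio transmission without gating microphone reads, with an audio-derived statistic batched for upload roughly every minute — can be sketched hypothetically. All class and method names here are invented for illustration:

```python
import math
import time

class MutedTelemetrySketch:
    """Hypothetical illustration of the behavior the researchers describe:
    the mute flag stops audio transmission, but the app keeps reading the
    microphone and derives a telemetry statistic from each buffer."""

    def __init__(self, upload_interval_s: float = 60.0):
        self.muted = True
        self.upload_interval_s = upload_interval_s
        self._pending: list[float] = []
        self._last_upload = time.monotonic()
        self.uploads: list[list[float]] = []  # stand-in for network sends

    def on_audio_buffer(self, samples: list[float]) -> None:
        # Note: no check of self.muted here -- the mic is read either way.
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        self._pending.append(rms)
        if time.monotonic() - self._last_upload >= self.upload_interval_s:
            self.uploads.append(self._pending)  # "phone home" once a minute
            self._pending = []
            self._last_upload = time.monotonic()
```

A mute button that users can trust would gate `on_audio_buffer` itself — i.e., stop reading the microphone at the source — rather than merely suppressing the transmitted audio stream downstream.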

[…]

Worse still from a security standpoint, while other apps encrypted their outgoing data stream before sending it to the operating system’s socket interface, Webex did not.

“Only in Webex were we able to intercept plaintext immediately before it is passed to the Windows network socket API,” the paper says, noting that the app’s monitoring behavior is inconsistent with the Webex privacy policy.

The app’s privacy policy states Cisco Webex Meetings does not “monitor or interfere with you your [sic] meeting traffic or content.”

[…]

Source: Cisco’s Webex phoned home audio telemetry even when muted • The Register

Mega-Popular Muslim Prayer Apps Were Secretly Harvesting Phone Numbers

Google recently booted over a dozen apps from its Play Store—among them Muslim prayer apps with 10 million-plus downloads, a barcode scanner, and a clock—after researchers discovered secret data-harvesting code hidden within them. Creepier still, the clandestine code was engineered by a company linked to a Virginia defense contractor, which paid developers to incorporate its code into their apps to pilfer users’ data.

The researchers came upon a piece of code, implanted in multiple apps, that was being used to siphon off personal identifiers and other data from devices. The code, a software development kit, or SDK, could “without a doubt be described as malware,” one researcher said.

For the most part, the apps in question appear to have served basic, repetitive functions—the sort that a person might download and then promptly forget about. However, once implanted onto the user’s phone, the SDK-laced programs harvested important data points about the device and its users like phone numbers and email addresses, researchers revealed.

The Wall Street Journal originally reported that the weird, invasive code was discovered by a pair of researchers, Serge Egelman and Joel Reardon, both of whom co-founded an organization called AppCensus, which audits mobile apps for user privacy and security. In a blog post on their findings, Reardon writes that AppCensus initially reached out to Google about their findings in October of 2021. However, the apps ultimately weren’t expunged from the Play Store until March 25, after Google had investigated, the Journal reports.

[…]

Source: Mega-Popular Muslim Prayer Apps Were Secretly Harvesting Phone Numbers

Microsoft is finally making it easier to switch default browsers in Windows 11

Microsoft is finally making it easier to change your default browser in Windows 11. A new update (KB5011563) that started rolling out this week allows Windows 11 users to change their default browser with a single click. After testing the change in December, Microsoft is now rolling the one-click method out to all Windows 11 users.

Originally, Windows 11 shipped without the simple button for switching default browsers that was always available in Windows 10. Instead, Microsoft forced Windows 11 users to change individual file extensions or protocol handlers for HTTP, HTTPS, .HTML, and .HTM, or to tick a checkbox that only appeared when they clicked a link from outside a browser. Microsoft defended its decision to make switching defaults harder, but rival browser makers like Mozilla, Brave, and even Google’s head of Chrome criticized the approach.

Windows 11 now has a button to change default browsers.
Image: Tom Warren / The Verge

In the latest update to Windows 11, you can now head into the default apps section, search for your browser of choice, and then a button appears asking if you’d like to make it the default. All of the work of changing file handlers is done in a single click, making this a big improvement over what existed before.

[…]

Source: Microsoft is finally making it easier to switch default browsers in Windows 11 – The Verge


Bungie lawsuit aims to unmask YouTube copyright claim abusers

YouTube’s copyright claim system has been repeatedly abused for bogus takedown requests, and Bungie has had enough. TorrentFreak reports the game studio has sued 10 anonymous people for allegedly leveling false Digital Millennium Copyright Act (DMCA) claims against a host of Destiny 2 creators on YouTube, and even Bungie itself. The company said the culprits took advantage of a “hole” in YouTube’s DMCA security that let anyone claim to represent a rights holder, effectively letting “any person, anywhere” misuse the system to suit their own ends.

According to Bungie, the perpetrators created a Gmail account in mid-March that was intended to mimic the developer’s copyright partner CSC. They then issued DMCA takedown notices while falsely claiming to represent Bungie, and even tried to fool creators with another account that insisted the first was fraudulent. YouTube didn’t notice the fake credentials and slapped video producers with copyright strikes, even forcing users to remove videos if they wanted to avoid bans.

YouTube removed the strikes, suspended the Gmail accounts and otherwise let creators recover, but not before Bungie struggled with what it called a “circular loop” of support. The firm said it only broke the cycle by having its Global Finance Director email key Google personnel, and Google still “would not share” info to identify the fraudsters. Bungie hoped a DMCA subpoena and other measures would help identify the attackers and punish them, including damages that could reach $150,000 for each false takedown notice.

[…]

Source: Bungie lawsuit aims to unmask YouTube copyright claim abusers | Engadget

Ubiquiti Files Case Against Security Blogger Krebs Over ‘False Accusations’ (for doing his job)

In March of 2021 the Krebs on Security blog reported that Ubiquiti, “a major vendor of cloud-enabled Internet of Things devices,” had disclosed a breach exposing customer account credentials. But Krebs added that a company source “alleges” that Ubiquiti was downplaying the severity of the incident — which is not true, says Ubiquiti.

Krebs’ original post now includes an update — putting the word “breach” in quotation marks, and noting that actually a former Ubiquiti developer had been indicted for the incident…and also for trying to extort the company. It was that extortionist, Ubiquiti says, who’d “alleged” they were downplaying the incident (which the extortionist had actually caused themselves).

Ubiquiti is now suing Krebs, “alleging that he falsely accused the company of ‘covering up’ a cyberattack,” ITWire reports: In its complaint, Ubiquiti said contrary to what Krebs had reported, the company had promptly notified its clients about the attack and instructed them to take additional security precautions to protect their information. “Ubiquiti then notified the public in the next filing it made with the SEC. But Krebs intentionally disregarded these facts to target Ubiquiti and increase ad revenue by driving traffic to his website, www.KrebsOnSecurity.com,” the complaint alleged.

It said there was no evidence to support Krebs’ claims and only one source, [the indicted former employee] Nickolas Sharp….

According to the indictment issued by the Department of Justice against Sharp in December 2021, after publication of the articles in question on 30 and 31 March, Ubiquiti’s stock price fell by about 20% and the company lost more than US$4 billion (A$5.32 billion) in market capitalisation…. The complaint alleged Krebs had intentionally misrepresented the truth because he had a financial incentive to do so, adding, “His entire business model is premised on publishing stories that conform to this narrative….”

[…]

Krebs was accused of two counts of defamation, with Ubiquiti seeking a jury trial and asking for a judgment against him that awarded compensatory damages of more than US$75,000, punitive damages of US$350,000, all expenses and costs including lawyers’ fees and any further relief deemed appropriate by the court.

Source: Ubiquiti Files Case Against Security Blogger Krebs Over ‘False Accusations’ – Slashdot

Ubiquiti’s security is spectacularly bad, with incidents such as anyone with SSH/Telnet access to an access point being able to get in, read the database, and change the root passwords. Their updates are few and far between and very poorly communicated (if at all) to clients who don’t have a UNP machine. They did not notify me about the breach until some time after Krebs broke the story, and then only in the vaguest of terms.

To blame a reporting party for your own failings is flailing around like a little kid, and it’s a disgrace that the legal system allows this kind of bullying.

Copyright Is Indispensable For Artists, They Say; But For All Artists, Or Just Certain Kinds?

One of the central “justifications” for copyright is that it is indispensable if creativity is to be viable. Without it, we are assured, artists would starve. This ignores the fact that artists created and thrived for thousands of years before the 1710 Statute of Anne. But leaving that historical detail aside, as well as the larger question of the claimed indispensability of copyright, a separate issue is whether copyright is a good fit for all creativity, or whether it has inherent biases that few like to talk about.

One person who does talk about them is Kevin J. Greene, John J. Schumacher Chair Professor of Law at Southwestern Law School in Los Angeles. In his 2008 paper “‘Copynorms,’ Black Cultural Production, and the Debate Over African-American Reparations” he writes:

To paraphrase Pink Floyd, there’s a dark sarcasm in the stance of the entertainment industry regarding “copynorms” [respect for copyright]. Indeed, the “copynorms” rhetoric the entertainment industry espouses shows particular irony in light of its long history of piracy of the works of African-American artists, such as blues artists and composers.

In another analysis, Greene points out that several aspects of copyright are a poor fit for the way many artists create. For example:

The [US] Copyright Act requires that a work of authorship be “fixed in any tangible medium of expression, now known or later developed, from which [it] can be perceived, reproduced, or otherwise communicated, either directly or indirectly with the aid of a machine or device.” Although “race-neutral”, the fixation requirement has not served the ways Black artists create: “a key component of black cultural production is improvisation.” As a result, fixation deeply disadvantages African-American modes of cultural production, which are derived from an oral tradition and communal standards.

The same is true for much creativity outside the Western nations that invented the idea of copyright, and then proceeded to impose its norms on other nations, not least through trade agreements. Greene’s observation suggests that copyright is far from universally applicable, and may just be a reflection of certain cultural and historical biases. When people talk airily about how copyright is needed to support artists, it is important to ask them to specify which artists, and then to examine whether copyright really is such a good fit for their particular kind of creativity.

Source: Copyright Is Indispensable For Artists, They Say; But For All Artists, Or Just Certain Kinds? | Techdirt

EU, US strike preliminary deal to unlock transatlantic data flows – yup, the EU will let the US spy on its citizens freely again

Negotiators have been working on an agreement — which allows Europeans’ personal data to flow to the United States — since the EU’s top court struck down the Privacy Shield agreement in July 2020 because of fears that the data was not safe from access by American agencies once transferred across the Atlantic.

The EU chief’s comments Friday show both sides have reached a political breakthrough, coinciding with U.S. President Joe Biden’s visit to Brussels this week.

“I am pleased that we found an agreement in principle on a new framework for transatlantic data flows. This will enable predictable and trustworthy data flows between the EU and U.S., safeguarding privacy and civil liberties,” said European Commission President Ursula von der Leyen.

Biden said the framework would allow the EU “to once again authorize transatlantic data flows that help facilitate $7.1 trillion in economic relationships.”

Friday’s announcement will come as a relief to the hundreds of companies that had faced mounting legal uncertainty over how to shuttle everything from payroll information to social media post data to the U.S.

Officials on both sides of the Atlantic had been struggling to bridge an impasse over what it means to give Europeans effective legal redress against surveillance by U.S. authorities. Not all of those issues have been resolved, though von der Leyen’s comments Friday suggest technical solutions are within reach.

Despite the ripples of relief Friday’s announcement will send through the business community, any deal is likely to be challenged in the courts by privacy campaigners.

Source: EU, US strike preliminary deal to unlock transatlantic data flows – POLITICO

Virtual Kidnappers Are Scamming Parents Out of Millions of Dollars

[…]

cases have become so widespread that the bureau has a name for them: virtual kidnappings. “It’s a telephone extortion scheme,” says Arbuthnot, who heads up virtual-kidnapping investigations for the FBI out of Los Angeles. Because many of the crimes go unreported, the bureau doesn’t have a precise number on how widespread the scam is. But over the past few years, thousands of families like the Mendelsteins have experienced the same bizarre nightmare: a phone call, a screaming child, a demand for ransom money, and a kidnapping that — after painful minutes, hours, or even days — is revealed to be fake.

[…]

Valerie Sobel, a Beverly Hills resident who runs a charitable foundation, also received a call from a man who told her he had kidnapped her daughter. “We have your daughter’s finger,” he said. “Do you want the rest of her in a body bag?” As proof, the kidnapper said, he was putting her daughter on the phone. “Mom! Mom!” she heard her daughter cry. “Please help — I’m in big trouble!” Like Mendelstein, Sobel was told not to take any other calls. After getting the ransom money from her bank, she was directed to a MoneyGram facility, where she wired the cash to the kidnappers — only to discover that her daughter had never been abducted.

The cases weren’t just terrifying the victims; they were also rattling police officers, who found themselves scrambling to stop kidnappings that weren’t real. “They’re jumping fences, they’re breaking down doors to rescue people,” Arbuthnot tells me. The calls were so convincing that they even duped some in law enforcement.

[…]

I’m listening to a recording of a virtual kidnapping that Arbuthnot is playing for me, to demonstrate just how harrowing the calls can be. “It begins with the crying,” he says. “That’s what most people hear first: Help me, help me, help me, Mommy, Mommy, Daddy.”

Virtual kidnapping calls, like any other telemarketing pitch, are essentially a numbers game. “It’s literally cold-calling,” Arbuthnot tells me. “We’ll see 100 phone calls that are total failures, and then we’ll see a completely successful call. And all you need is one, right?”

The criminals start with a selected area code and then methodically work their way through the possible seven-digit combinations of local phone numbers. Not surprisingly, the first area where the police noticed a rash of calls was 310 — Beverly Hills. But it’s not enough to just get a potential mark to pick up. Virtual kidnapping is a form of hypnosis: The kidnappers need you to fall under their spell. In hacker parlance, they’re “social engineers,” dispassionately rewiring your reactions by psychologically manipulating you. That’s why they start with an emotional gut punch that’s almost impossible to ignore: a recording of a child crying for help.

The recordings are generic productions, designed to ensnare as many victims as possible. “They’re not that sophisticated,” Arbuthnot tells me. It’s a relatively simple process: The criminals get a young woman they know to pretend she’s been kidnapped, and record her hysterical pleas. From there, the scheme follows one of two paths. Either you don’t have a kid, or suspect something is amiss, and hang up. Or, like many parents, you immediately panic at the sound of a terrified child.

Before you can form a rational thought, you blurt out your kid’s name, if only to make sense of what you’re hearing. Lisa? you say. Is that you? What’s wrong?

At that point, you’ve sealed your fate. Never mind that the screams you’re hearing aren’t those of your own kid. In a split second, you’ve not only bought into the con, but you’ve also given the kidnappers the one thing they need to make it stick. “We’ve kidnapped Lisa,” they tell you — and with that, your fear takes over. Adrenaline floods your bloodstream, your heart rate soars, your breath quickens, and your blood sugar spikes. No matter how skeptical or street-savvy you consider yourself, they’ve got you.

[…]

The other elements of virtual kidnappings are taken straight from the playbook for classic cons. Don’t give the mark time to think. Don’t let them talk to anyone else. Get them to withdraw an amount of cash they can get their hands on right away, and wire it somewhere untraceable. Convince them a single deviation from your instructions will cost them dearly.

[…]

the most innovative aspect of the scheme was the kidnapping calls: They were made from inside the prison in Mexico City, where Ramirez was serving time. “Who has time seven days a week, 12 hours a day, to make phone calls to the US, over and over and over, with a terrible success rate?” Arbuthnot says. “Prisoners. That was a really big moment for us. When we realized what was happening, it all made sense.”

[…]

there’s an obvious problem: Ramirez and Zuniga are already incarcerated, as the feds suspect is the case with almost every other virtual kidnapper who is still cold-calling potential victims. Which raises the question: How do you stop a crime that’s being committed by criminals you’ve already caught?

“What are we going to do?” Arbuthnot says. “We’re going to put these people in jail? They’re already in jail.”

[…]

Source: Virtual Kidnappers Are Scamming Parents Out of Millions of Dollars

Messages, Dialer apps sent text, call info to Google

Google’s Messages and Dialer apps for Android devices have been collecting and sending data to Google without specific notice and consent, and without offering the opportunity to opt-out, potentially in violation of Europe’s data protection law.

According to a research paper, “What Data Do The Google Dialer and Messages Apps On Android Send to Google?” [PDF], by Trinity College Dublin computer science professor Douglas Leith, Google Messages (for text messaging) and Google Dialer (for phone calls) have been sending data about user communications to the Google Play Services Clearcut logger service and to Google’s Firebase Analytics service.

“The data sent by Google Messages includes a hash of the message text, allowing linking of sender and receiver in a message exchange,” the paper says. “The data sent by Google Dialer includes the call time and duration, again allowing linking of the two handsets engaged in a phone call. Phone numbers are also sent to Google.”

The timing and duration of other user interactions with these apps have also been transmitted to Google. And Google offers no way to opt out of this data collection.

[…]

From the Messages app, Google takes the message content and a timestamp, generates a SHA256 hash, which is the output of an algorithm that maps the human readable content to an alphanumeric digest, and then transmits a portion of the hash, specifically a truncated 128-bit value, to Google’s Clearcut logger and Firebase Analytics.

Hashes are designed to be difficult to reverse, but in the case of short messages, Leith said he believes some of these could be undone to recover some of the message content.

“I’m told by colleagues that yes, in principle this is likely to be possible,” Leith said in an email to The Register today. “The hash includes a hourly timestamp, so it would involve generating hashes for all combinations of timestamps and target messages and comparing these against the observed hash for a match – feasible I think for short messages given modern compute power.”
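The attack Leith outlines can be sketched concretely. The exact input encoding Google Messages uses for the hash is not public, so the `telemetry_hash` construction below is an assumption; the point is that once the construction is known, a truncated SHA-256 over (hourly timestamp, short message) falls to plain enumeration:

```python
import hashlib
from itertools import product

def telemetry_hash(hour_ts: int, message: str) -> bytes:
    """Assumed construction: SHA-256 over an hourly timestamp plus the
    message text, truncated to 128 bits. The real input encoding used by
    Google Messages is not public."""
    digest = hashlib.sha256(f"{hour_ts}|{message}".encode()).digest()
    return digest[:16]  # truncated 128-bit value

def brute_force(observed: bytes, hours: list[int], candidates: list[str]):
    """Try every (timestamp, candidate message) pair against the observed
    truncated hash -- feasible when the message space is small."""
    for hour_ts, msg in product(hours, candidates):
        if telemetry_hash(hour_ts, msg) == observed:
            return hour_ts, msg
    return None

# A short message like "ok" falls to enumeration almost immediately:
observed = telemetry_hash(1650000, "ok")
print(brute_force(observed, hours=[1649999, 1650000], candidates=["yes", "no", "ok"]))
```

Hashing is not encryption: it only protects content whose input space is too large to enumerate, which short text messages are not. The hourly timestamp multiplies the work only by the handful of plausible hours, as Leith notes.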

The Dialer app likewise logs incoming and outgoing calls, along with the time and the call duration.

[…]

The paper describes nine recommendations made by Leith and six changes Google has already made or plans to make to address the concerns raised in the paper. The changes Google has agreed to include:

  • Revising the app onboarding flow so that users are notified they’re using a Google app and are presented with a link to Google’s consumer privacy policy.
  • Halting the collection of the sender phone number by the CARRIER_SERVICES log source, of the SIM ICCID, and of a hash of sent/received message text by Google Messages.
  • Halting the logging of call-related events in Firebase Analytics from both Google Dialer and Messages.
  • Shifting more telemetry data collection to use the least long-lived identifier available where possible, rather than linking it to a user’s persistent Android ID.
  • Making it clear when caller ID and spam protection are turned on and how they can be disabled, while also looking at ways to use less information or fuzzed information for safety functions.

[…]

Leith said there are two larger matters related to Google Play Services, which is installed on almost all Android phones outside of China.

“The first is that the logging data sent by Google Play Services is tagged with the Google Android ID which can often be linked to a person’s real identity – so the data is not anonymous,” he said. “The second is that we know very little about what data is being sent by Google Play Services, and for what purpose(s). This study is the first to cast some light on that, but it’s very much just the tip of the iceberg.”

Source: Messages, Dialer apps sent text, call info to Google • The Register

HBO hit with class action lawsuit for allegedly sharing subscriber data with Facebook

HBO is facing a class action lawsuit over allegations that it gave subscribers’ viewing history to Facebook without proper permission, Variety has reported. The suit accuses HBO of providing Facebook with customer lists, allowing the social network to match viewing habits with their profiles.

It further alleges that HBO knows Facebook can combine the data because HBO is a major Facebook advertiser — and Facebook can then use that information to retarget ads to its subscribers. Since HBO never received proper customer consent to do this, it allegedly violated the 1988 Video Privacy Protection Act (VPPA), according to the lawsuit.

HBO, like other sites, discloses to users that it (and partners) use cookies to deliver personalized ads. However, the VPPA requires separate consent from users to share their video viewing history. “A standard privacy policy will not suffice,” according to the suit.

Other streaming providers have been hit with similar claims, and TikTok recently agreed to pay a $92 million settlement for (in part) violating the VPPA. In another case, however, a judge ruled in 2015 that Hulu didn’t knowingly share data with Facebook that could establish an individual’s viewing history. The law firm involved in the HBO suit previously won a $50 million settlement with Hearst after alleging that it violated Michigan privacy laws by selling subscriber data.

Source: HBO hit with class action lawsuit for allegedly sharing subscriber data with Facebook | Engadget

Italy slaps creepy webscraping facial recognition firm Clearview AI with €20 million fine

Italy’s data privacy watchdog said it will fine the controversial facial recognition firm Clearview AI for breaching EU law. An investigation by Garante, Italy’s data protection authority, found that the company’s database of 10 billion images of faces includes those of Italians and residents in Italy. The New York City-based firm is being fined €20 million, and will also have to delete any facial biometrics it holds of Italian nationals.

This isn’t the first time that the beleaguered facial recognition tech company is facing legal consequences. The UK data protection authority last November fined the company £17 million after finding its practices—which include collecting selfies of people without their consent from security camera footage or mugshots—violate the nation’s data protection laws. The company has also been banned in Sweden, France and Australia.

The accumulated fines will be a considerable blow for the now five-year-old company, completely wiping away the $30 million it raised in its last funding round. But Clearview AI appears to be just getting started. The company is on track to patent its biometric database, which scans faces across public internet data and has been used by law enforcement agencies around the world, including police departments in the United States and a number of federal agencies. A number of Democrats have urged federal agencies to drop their contracts with Clearview AI, claiming that the tool is a severe threat to the privacy of everyday citizens. In a letter to the Department of Homeland Security, Sens. Ed Markey and Jeff Merkley and Reps. Pramila Jayapal and Ayanna Pressley urged regulators to discontinue their use of the tool.

“Clearview AI reportedly scrapes billions of photos from social media sites without permission from or notice to the pictured individuals. In conjunction with the company’s facial recognition capabilities, this trove of personal information is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified,” wrote the authors of the letter.

Despite losing troves of facial recognition data from entire countries, Clearview AI has a plan to rapidly expand this year. The company told investors that it is on track to have 100 billion photos of faces in its database within a year, reported The Washington Post. In its pitch deck, the company said it hopes to secure an additional $50 million from investors to build even more facial recognition tools and ramp up its lobbying efforts.

Source: Italy slaps facial recognition firm Clearview AI with €20 million fine | Engadget

Ice Cream Machine Repairers Sue McDonald’s for $900 Million

For years, the tiny startup Kytch worked to invent and sell a device designed to fix McDonald’s notoriously broken ice cream machines, only to watch the fast food Goliath crush their business like the hopes of so many would-be McFlurry customers. Now Kytch is instead seeking to serve out cold revenge—nearly a billion dollars worth of it.

Late Tuesday night, Kytch filed a long-expected legal complaint against McDonald’s, accusing the company of false advertising and tortious interference in its contracts with customers. Kytch’s cofounders, Melissa Nelson and Jeremy O’Sullivan, are asking for no less than $900 million in damages.

Since 2019, Kytch has sold a phone-sized gadget designed to be installed inside McDonald’s ice cream machines. Those Kytch devices would intercept the ice cream machines’ internal communications and send them out to a web or smartphone interface to help owners remotely monitor and troubleshoot the machines’ many foibles, which are so widely acknowledged that they’ve become a full-blown meme among McDonald’s customers. The two-person startup’s new claims against McDonald’s focus on emails the fast food giant sent to every franchisee in November 2020, instructing them to pull Kytch devices out of their ice cream machines immediately.

Those emails warned franchisees that the Kytch devices not only violated the ice cream machines’ warranties and intercepted their “confidential information” but also posed a safety threat and could lead to “serious human injury,” a claim that Kytch describes as false and defamatory. Kytch also notes that McDonald’s used those emails to promote a new ice cream machine, built by its longtime appliance manufacturing partner Taylor, that would offer similar features to Kytch. The Taylor devices, meanwhile, have yet to see public adoption beyond a few test installations.

Kytch cofounder Melissa Nelson says the emails didn’t just result in McDonald’s ice cream machines remaining broken around the world. (About one in seven of the machines in the US remained out of commission on Monday, according to McBroken.com, which tracks the problem in real time.) They also kneecapped Kytch’s fast-growing sales just as the startup was taking off. “They’ve tarnished our name. They scared off our customers and ruined our business. They were anti-competitive. They lied about a product that they said would be released,” Nelson says. “McDonald’s had every reason to know that Kytch was safe and didn’t have any issues. It was not dangerous, like they claimed. And so we’re suing them.”

Before it found itself in conflict with soft-serve superpowers, Kytch had shown some early success in solving McDonald’s ice cream headaches. Its internet-connected add-on gadget helped franchisees avoid problems like hours of downtime when Taylor’s finicky daily pasteurization cycle failed. McDonald’s restaurant owners interviewed by WIRED liked the device; one said it saved him “easily thousands of dollars a month” from lost revenue and repair fees. Kytch says that by the end of 2020 it had 500 customers and was doubling its sales every quarter—all of which evaporated when McDonald’s ordered its franchisees to ditch Kytch’s gadgets.

Kytch first fired back against the fast-food ice cream establishment last May, suing Taylor and its distributor TFG for theft of trade secrets. The Kytch founders argued in that lawsuit that Taylor worked with TFG and one franchise owner to stealthily obtain a Kytch device, reverse-engineer it, and attempt to copy its features.

But all along, Kytch’s cofounders have hinted that they intended to use the discovery process in their lawsuit against Taylor to dig up evidence for a suit against McDonald’s too. In fact, the 800 pages of internal Taylor emails and presentations that Kytch has so far obtained in discovery show that it was McDonald’s, not Taylor, that at many points led the effort to study and develop a response to Kytch in 2020.

[…]

Source: Ice Cream Machine Hackers Sue McDonald’s for $900 Million | WIRED

UK Online Safety Bill to require more data to use social media – e.g. send them your passport

The country’s forthcoming Online Safety Bill will require citizens to hand over even more personal data to largely foreign-headquartered social media platforms, government minister Nadine Dorries has declared.

“The vast majority of social networks used in the UK do not require people to share any personal details about themselves – they are able to identify themselves by a nickname, alias or other term not linked to a legal identity,” said Dorries, Secretary of State for Digital, Culture, Media and Sport (DCMS).

Another legal duty to be imposed on social media platforms will be a requirement to give users a “block” button, something that has been part of most of today’s platforms since their launch.

“When it comes to verifying identities,” said DCMS in a statement, “some platforms may choose to provide users with an option to verify their profile picture to ensure it is a true likeness. Or they could use two-factor authentication where a platform sends a prompt to a user’s mobile number for them to verify.”

“Alternatively,” continued the statement, “verification could include people using a government-issued ID such as a passport to create or update an account.”

Two-factor authentication is a login technology to prevent account hijacking by malicious people, not a method of verifying a user’s government-approved identity.
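The distinction is easy to make concrete. Below is a minimal sketch of RFC 6238 TOTP, the standard behind most authenticator prompts: the server and the user share a secret, and the check only proves possession of that secret, never who the holder legally is.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the number of 30-second steps since epoch."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Anyone holding the shared secret produces the same code; the check
# proves possession of a device, not the holder's legal identity.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59))  # RFC 6238 test vector: "287082"
```

That is the whole mechanism: no passport, no name, no government record is involved at any point.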

“People will now have more control over who can contact them and be able to stop the tidal wave of hate served up to them by rogue algorithms,” said Dorries.

Social networks offering services to Britons don’t currently require lots of personal data to register as a user. Most people see this as a benefit; the government seems to see it as a negative.

Today’s statement has led to widespread concerns that DCMS will place UK residents at greater risk of online identity theft or of falling victim to a data breach.

The Online Safety Bill was renamed from the Online Harms Bill shortly before its formal introduction to Parliament. Widely regarded by the technically literate as a disaster in the making, the bill, critics say, risks creating an “algorithm-driven censorship future” through new regulations that would make it legally risky for platforms not to proactively censor users’ posts.

It is also closely linked to strong rhetoric discouraging end-to-end encryption rollouts for the sake of “minors”, and its requirements would mean that tech platforms attempting to comply would have to weaken security measures.

Parliamentary efforts at properly scrutinising the draft bill then led to the “scrutineers” instead publishing a manifesto asking for even stronger legal weapons to be included.

[…]

Source: Online Safety Bill to require more data to use social media

EU Data Watchdog Calls for Total Ban of Pegasus Spyware

Israeli authorities say it should be probed and U.S. authorities are calling for it to be sanctioned, but EU officials have a different idea for how to handle Pegasus spyware: just ban that shit entirely.

That’s the main takeaway from a new memo released on Tuesday by the EDPS, the Union’s dedicated data watchdog, noting that a full-on ban across the entire region is the only appropriate response to the “unprecedented risks” the tech poses—not only to people’s devices but “to democracy and the rule of law.”

“As the specific technical characteristics of spyware tools like Pegasus make control over their use very difficult, we have to rethink the entire existing system of safeguards established to protect our fundamental rights and freedoms,” the report reads. “Pegasus constitutes a paradigm shift in terms of access to private communications and devices. This fact makes its use incompatible with our democratic values.”

A “paradigm shift” is a good way to describe the tool, which has been used to target a mounting number of civic actors, activists, and political figures from around the globe, including some notable figures from inside the EU. This past summer, local outlets reported that French president Emmanuel Macron appeared on a list of potential targets that foreign actors had planned to surveil with the software, and later reports revealed traces of the tech on phones belonging to Macron’s current staffers. Officials from other EU member states like Hungary and Spain have also reported the tech on their devices, and Poland became the latest member to join the list last month when a team of researchers found the spyware being used to surveil three outspoken critics of the Polish government.

[…]

Source: EU Data Watchdog Calls for Total Ban of Pegasus Spyware

100 Billion Face Photos? Clearview AI tells investors it’s On Track to Identify ‘Almost Everyone in the World’

The Washington Post reports: Clearview AI is telling investors it is on track to have 100 billion facial photos in its database within a year, enough to ensure “almost everyone in the world will be identifiable,” according to a financial presentation from December obtained by The Washington Post.

Those images — equivalent to 14 photos for each of the 7 billion people on Earth — would help power a surveillance system that has been used for arrests and criminal investigations by thousands of law enforcement and government agencies around the world. And the company wants to expand beyond scanning faces for the police, saying in the presentation that it could monitor “gig economy” workers and is researching a number of new technologies that could identify someone based on how they walk, detect their location from a photo or scan their fingerprints from afar.

The 55-page “pitch deck,” the contents of which have not been reported previously, reveals surprising details about how the company, whose work already is controversial, is positioning itself for a major expansion, funded in large part by government contracts and the taxpayers the system would be used to monitor. The document was made for fundraising purposes, and it is unclear how realistic its goals might be. The company said that its “index of faces” has grown from 3 billion images to more than 10 billion since early 2020 and that its data collection system now ingests 1.5 billion images a month.

With $50 million from investors, the company said, it could bulk up its data collection powers to 100 billion photos, build new products, expand its international sales team and pay more toward lobbying government policymakers to “develop favorable regulation.”

The article notes that major tech companies like Amazon, Google, IBM and Microsoft have all limited or ended their own sales of facial recognition technology — adding that Clearview’s presentation simply describes this as a major business opportunity for itself.

In addition, the Post reports Clearview’s presentation brags “that its product is even more comprehensive than systems in use in China, because its ‘facial database’ is connected to ‘public source metadata’ and ‘social linkage’ information.”

Source: 100 Billion Face Photos? Clearview AI tells investors it’s On Track to Identify ‘Almost Everyone in the World’ – Slashdot

It’s Back: Senators Want ‘EARN IT’ Bill To Scan All Online Messages by private companies – also misusing children as an excuse

A group of lawmakers has re-introduced the EARN IT Act, an incredibly unpopular bill from 2020 that “would pave the way for a massive new surveillance system, run by private companies, that would roll back some of the most important privacy and security features in technology used by people around the globe,” writes Joe Mullin via the Electronic Frontier Foundation. “It’s a framework for private actors to scan every message sent online and report violations to law enforcement. And it might not stop there. The EARN IT Act could ensure that anything hosted online — backups, websites, cloud photos, and more — is scanned.” From the report: The bill empowers every U.S. state or territory to create sweeping new Internet regulations, by stripping away the critical legal protections for websites and apps that currently prevent such a free-for-all — specifically, Section 230. The states will be allowed to pass whatever type of law they want to hold private companies liable, as long as they somehow relate their new rules to online child abuse. The goal is to get states to pass laws that will punish companies when they deploy end-to-end encryption, or offer other encrypted services. This includes messaging services like WhatsApp, Signal, and iMessage, as well as web hosts like Amazon Web Services. […]

Separately, the bill creates a 19-person federal commission, dominated by law enforcement agencies, which will lay out voluntary “best practices” for attacking the problem of online child abuse. Regardless of whether state legislatures take their lead from that commission, or from the bill’s sponsors themselves, we know where the road will end. Online service providers, even the smallest ones, will be compelled to scan user content, with government-approved software like PhotoDNA. If EARN IT supporters succeed in getting large platforms like Cloudflare and Amazon Web Services to scan, they might not even need to compel smaller websites — the government will already have access to the user data, through the platform. […] Senators supporting the EARN IT Act say they need new tools to prosecute cases over child sexual abuse material, or CSAM. But the methods proposed by EARN IT take aim at the security and privacy of everything hosted on the Internet.
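To make the scanning mechanism concrete, here is a toy, stdlib-only sketch of hash-based content matching, the general idea behind tools like PhotoDNA (whose actual algorithm is proprietary and far more robust); the images and database here are invented for illustration. An upload’s perceptual hash is compared against a database of known hashes, so even slightly altered copies still match:

```python
# Toy "average hash": an image is reduced to a short bit string, and
# uploads are matched against a database of hashes of known content.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (a stand-in for a decoded image)."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if v > mean else 0 for v in flat)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# Database of hashes of known flagged images (hypothetical data).
known_bad = {average_hash([[10, 200], [220, 30]])}

def scan(upload, threshold=1):
    """Flag an upload if its hash is within `threshold` bits of a known hash."""
    h = average_hash(upload)
    return any(hamming(h, bad) <= threshold for bad in known_bad)

print(scan([[12, 198], [221, 28]]))  # near-duplicate of a known image -> True
print(scan([[200, 10], [30, 220]]))  # unrelated image -> False
```

The privacy concern follows directly from the design: to run this check at all, the scanner must see every image a user sends or stores.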

The Senators supporting the bill have said that their mass surveillance plans are somehow magically compatible with end-to-end encryption. That’s completely false, no matter whether it’s called “client side scanning” or another misleading new phrase. The EARN IT Act doesn’t target Big Tech. It targets every individual internet user, treating us all as potential criminals who deserve to have every single message, photograph, and document scanned and checked against a government database. Since direct government surveillance would be blatantly unconstitutional and provoke public outrage, EARN IT uses tech companies — from the largest ones to the very smallest ones — as its tools. The strategy is to get private companies to do the dirty work of mass surveillance.

Source: It’s Back: Senators Want ‘EARN IT’ Bill To Scan All Online Messages – Slashdot

Revealed: UK Gov’t Plans Publicity Blitz to Undermine Chat Privacy, encryption. Of course they use children. And Fear.

The UK government is set to launch a multi-pronged publicity attack on end-to-end encryption, Rolling Stone has learned. One key objective: mobilizing public opinion against Facebook’s decision to encrypt its Messenger app.

The Home Office has hired the M&C Saatchi advertising agency — a spin-off of Saatchi and Saatchi, which made the “Labour Isn’t Working” election posters, among the most famous in UK political history — to plan the campaign, using public funds.

According to documents reviewed by Rolling Stone, one of the activities considered as part of the publicity offensive is a striking stunt — placing an adult and child (both actors) in a glass box, with the adult looking “knowingly” at the child as the glass fades to black. Multiple sources confirmed the campaign was due to start this month, with privacy groups already planning a counter-campaign.

[…]

Successive Home Secretaries of different political parties have taken strong anti-encryption stances, claiming the technology — which is essential for online privacy and security — will diminish the effectiveness of UK bulk surveillance capabilities, make fighting organized crime more difficult, and hamper the ability to stop terror attacks. The American FBI has made similar arguments in recent years — claims which have been widely debunked by technologists and civil libertarians on both sides of the Atlantic.

The new campaign, however, is entirely focused on the argument that improved encryption would hamper efforts to tackle child exploitation online.

[…]

One key slide notes that “most of the public have never heard” of end-to-end encryption – adding that this means “people can be easily swayed” on the issue. The same slide notes that the campaign “must not start a privacy vs safety debate.”

Online advocates slammed the UK government plans as “scaremongering” that could put children and vulnerable adults at risk by undermining online privacy.

[…]

In response to a Freedom of Information request about an “upcoming ad campaign directed at Facebook’s end-to-end encryption proposal,” the Home Office disclosed that, “Under current plans, c.£534,000 is allocated for this campaign.”

[…]

Source: Revealed: UK Gov’t Plans Publicity Blitz to Undermine Chat Privacy – Rolling Stone

Is Microsoft Stealing People’s Bookmarks, passwords, ID / passport numbers without consent?

I received email from two people who told me that Microsoft Edge enabled syncing without warning or consent, which means that Microsoft sucked up all of their bookmarks. Of course they can turn syncing off, but it’s too late.

Has this happened to anyone else, or was this user error of some sort? If this is real, can some reporter write about it?

(Not that “user error” is a good justification. Any system where making a simple mistake means that you’ve forever lost your privacy isn’t a good one. We see this same situation with sharing contact lists with apps on smartphones. Apps will repeatedly ask, and only need you to accidentally click “okay” once.)

EDITED TO ADD: It’s actually worse than I thought. Edge urges users to store passwords, ID numbers, and even passport numbers, all of which get uploaded to Microsoft by default when sync is enabled.

Source: Is Microsoft Stealing People’s Bookmarks? – Schneier on Security

Suicide Hotline Collected, Monetized The Data Of Desperate People, Because Of Course It Did

Crisis Text Line, one of the nation’s largest nonprofit support options for the suicidal, is in some hot water. A Politico report last week highlighted how the company has been caught collecting and monetizing the data of callers… to create and market customer service software. More specifically, Crisis Text Line says it “anonymizes” some user and interaction data (ranging from the frequency certain words are used, to the type of distress users are experiencing) and sells it to a for-profit partner named Loris.ai. Crisis Text Line has a minority stake in Loris.ai, and gets a cut of their revenues in exchange.

As we’ve seen in countless privacy scandals before this one, the idea that this data is “anonymized” is once again held up as some kind of get out of jail free card:

“Crisis Text Line says any data it shares with that company, Loris.ai, has been wholly “anonymized,” stripped of any details that could be used to identify people who contacted the helpline in distress. Both entities say their goal is to improve the world — in Loris’ case, by making “customer support more human, empathetic, and scalable.”

But as we’ve noted more times than I can count, “anonymized” is effectively a meaningless term in the privacy realm. Study after study after study has shown that it’s relatively trivial to identify a user’s “anonymized” footprint when that data is combined with a variety of other datasets. For a long time the press couldn’t be bothered to point this out, something that’s thankfully starting to change.
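The re-identification step is easy to sketch. In this toy example (all names and records invented), a handful of quasi-identifiers left in an “anonymized” release are enough to link records back to a public dataset that carries names — the classic linkage attack Latanya Sweeney demonstrated against hospital discharge data:

```python
# Hypothetical "anonymized" release: names stripped, but quasi-identifiers
# (ZIP code, birth year, sex) retained alongside the sensitive field.
anonymized_release = [
    {"zip": "02138", "birth_year": 1945, "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_year": 1972, "sex": "M", "diagnosis": "asthma"},
]

# Hypothetical public dataset (e.g. a voter roll) with the same fields plus names.
voter_roll = [
    {"name": "Alice Example", "zip": "02138", "birth_year": 1945, "sex": "F"},
    {"name": "Bob Example", "zip": "02139", "birth_year": 1972, "sex": "M"},
]

def reidentify(release, auxiliary, keys=("zip", "birth_year", "sex")):
    """Link records on shared quasi-identifiers; a unique match defeats 'anonymization'."""
    index = {}
    for person in auxiliary:
        index.setdefault(tuple(person[k] for k in keys), []).append(person["name"])
    matches = []
    for record in release:
        candidates = index.get(tuple(record[k] for k in keys), [])
        if len(candidates) == 1:  # exactly one person fits -> re-identified
            matches.append((candidates[0], record["diagnosis"]))
    return matches

print(reidentify(anonymized_release, voter_roll))
# -> [('Alice Example', 'hypertension'), ('Bob Example', 'asthma')]
```

The more auxiliary datasets an attacker can join against, the fewer people share any given combination of quasi-identifiers, which is why stripping names alone offers so little protection.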

[…]

Source: Suicide Hotline Collected, Monetized The Data Of Desperate People, Because Of Course It Did | Techdirt

Google adds new opt-out tracking for Workspace customers

[…]

according to a new FAQ posted on Google’s Workspace administrator forum. At the end of that month, the company will be adding a new feature—“Workspace search history”—that can continue to track these customers, even if they, or their admins, turn activity tracking off.

The worst part? Unlike Google’s activity trackers that are politely defaulted to “off” for all users, this new Workspace-specific feature will be defaulted to “on,” across Workspace apps like Gmail, Google Drive, Google Meet, and more.

[…]

Luckily, they can turn this option off if they want to, the same way they could turn off activity settings until now. According to Google, the option to do so will be right on the “My Activity” page once the feature goes live, right alongside the current options to flip off Google’s ability to keep tabs on their web activity, location history, and YouTube history. On this page, Google says the option to turn off Workspace history will be located on the far lefthand side, under the “Other Google Activity” tab.

[…]

Source: Google Makes Opting Out Harder for Workspace Customers

LG Announces New Ad Targeting Features for TVs – wait, wtf, I bought my TV, not a service!

[…]

there are plenty of cases where you throw down hundreds of dollars for a piece of hardware and then you end up being the product anyway. Case in point: TVs.

On Wednesday, the television giant LG announced a new offering to advertisers that promises to be able to reach the company’s millions of connected devices in households across the country, pummeling TV viewers with—you guessed it—targeted ads. While ads playing on your connected TV might not be anything new, some of the metrics the company plans to hand over to advertisers include targeting viewers by specific demographics, for example, or being able to tie a TV ad view to someone’s in-store purchase down the line.

If you swap out a TV screen for a computer screen, the kind of microtargeting that LG’s offering doesn’t sound any different than what a company like Facebook or Google would offer. That’s kind of the point.

[…]

Aside from being an eyesore that literally no TV user wants, these ads come bundled with their own privacy issues, too. While the kinds of invasive tracking and targeting that regularly happen with the ads on your Facebook feed or Google search results are built on more than a decade’s worth of infrastructure, those in the connected television (or so-called “CTV”) space are clearly catching up, and catching up fast. Beyond what LG is offering, there are other players in adtech right now that offer ways to connect your in-app activity, or the billboards you walk by, with what you watch on TV. For whatever reason, this sort of tech largely sidesteps the kinds of privacy rules that regulators are trying to wrap their heads around right now—regulations like CPRA and GDPR are largely designed to govern how your data is handled on the web, not on TV.

[…]

The good news is that you have some sort of refuge from this ad-ridden hell, though it does take a few extra steps. If you own a smart TV, you can simply not connect it to the internet and use another device—an ad-free set-top box like an Apple TV, for instance—to access apps. Sure, a smart TV is dead simple to use, but the privacy trade-offs might wind up being too great.

Source: LG Announces New Ad Targeting Features for TVs

How normal am I? – Let an AI judge you

This is an art project by Tijmen Schep that shows how face detection algorithms are increasingly used to judge you. It was made as part of the European Union’s Sherpa research program.

No personal data is sent to our server in any way. Nothing. Zilch. Nada. All the face detection algorithms will run on your own computer, in the browser.

In this ‘test’ your face is compared with that of all the other people who came before you. At the end of the show you can, if you want to, share some anonymized data. That will then be used to re-calculate the new average. That anonymous data is not shared any further.

Source: How normal am I?

How to Download Everything Amazon Knows About You (It’s a Lot)

[…]To be clear, data collection is far from an Amazon-specific problem; it’s pretty much par for the course when it comes to tech companies. Even Apple, a company vocal about user privacy, has faced criticism in the past for recording Siri interactions and sharing them with third-party contractors.

The issue with Amazon, however, is the extent to which they collect and archive your data. Just about everything you do on, with, and around an Amazon product or service is logged and recorded. Sure, you might not be surprised to learn that when you visit Amazon’s website, the company logs your browsing history and shopping data. But it goes far beyond that. Since Amazon owns Whole Foods, it also saves your shopping history there. When you watch video content through its platforms, it records all of that information, too.

Things get even creepier with other Amazon products. If you read books on a Kindle, Amazon records your reading activity, including the speed of your page turns (I wonder if Bezos prefers a slow or fast page flip); if you peered into your Amazon data, you might find something similar to what a Reuters reporter found: On Aug. 8, 2020, someone on that account read The Mitchell Sisters: A Complete Romance Series from 4:52 p.m. through 7:36 p.m., completing 428 pages. (Nice sprint.)

If you have one of Amazon’s smart speakers, you’re on the record with everything you’ve ever uttered to the device: When you ask Alexa a question or give it a command, Amazon saves the audio files for the entire interaction. If you know how to access your data, you can listen to every one of those audio files, and relive moments you may or may not have realized were recorded.

Another Reuters reporter found Amazon saved over 90,000 recordings over a three-and-a-half-year period, which included the reporter’s children asking Alexa questions, recordings of those same children apologizing to their parents, and, in some cases, extended conversations that were outside the scope of a reasonable Alexa query.

Unfortunately, while you can access this data, Amazon doesn’t make it possible to delete much of it. You can tweak your privacy settings to stop your devices from recording quite as much information, but once data is logged, the main way to delete it is to delete the entire account it’s associated with. Still, even if you can’t delete the data while keeping your account, you do have a right to see what data Amazon has on you, and it’s simple to request.

How to download all of your Amazon data

To start, go to Amazon’s Help page. You’ll find the link under Security and Privacy > More in Security & Privacy > Privacy > How Do I Request My Data? Once there, click the “Request My Data” link.

From the dropdown menu, choose the data you want from Amazon. If you want everything, choose “Request All Your Data.” Hit “Submit Request,” then click the validation link in your email. That’s it. Amazon makes it easy to see what they have on you, probably because they know you can’t do anything about it.

[Reuters]

Source: How to Download Everything Amazon Knows About You (It’s a Lot)