The Linkielist

Linking ideas with the world

HBO hit with class action lawsuit for allegedly sharing subscriber data with Facebook

HBO is facing a class action lawsuit over allegations that it gave subscribers’ viewing history to Facebook without proper permission, Variety has reported. The suit accuses HBO of providing Facebook with customer lists, allowing the social network to match viewing habits with their profiles.

It further alleges that HBO knows Facebook can combine the data because HBO is a major Facebook advertiser — and Facebook can then use that information to retarget ads to its subscribers. Since HBO never received proper customer consent to do this, it allegedly violated the 1988 Video Privacy Protection Act (VPPA), according to the lawsuit.

HBO, like other sites, discloses to users that it and its partners use cookies to deliver personalized ads. However, the VPPA requires separate consent from users to share their video viewing history. “A standard privacy policy will not suffice,” according to the suit.

Other streaming providers have been hit with similar claims, and TikTok recently agreed to pay a $92 million settlement for (in part) violating the VPPA. In another case, however, a judge ruled in 2015 that Hulu didn’t knowingly share data with Facebook that could establish an individual’s viewing history. The law firm involved in the HBO suit previously won a $50 million settlement with Hearst after alleging that it violated Michigan privacy laws by selling subscriber data.

Source: HBO hit with class action lawsuit for allegedly sharing subscriber data with Facebook | Engadget

Italy slaps creepy webscraping facial recognition firm Clearview AI with €20 million fine

Italy’s data privacy watchdog said it will fine the controversial facial recognition firm Clearview AI for breaching EU law. An investigation by Garante, Italy’s data protection authority, found that the company’s database of 10 billion images of faces includes those of Italians and residents in Italy. The New York City-based firm is being fined €20 million, and will also have to delete any facial biometrics it holds of Italian nationals.

This isn’t the first time the beleaguered facial recognition tech company has faced legal consequences. The UK data protection authority last November fined the company £17 million after finding its practices—which include collecting images of people without their consent from security camera footage or mugshots—violate the nation’s data protection laws. The company has also been banned in Sweden, France and Australia.

The accumulated fines will be a considerable blow for the now five-year-old company, completely wiping away the $30 million it raised in its last funding round. But Clearview AI appears to be just getting started. The company is on track to patent its biometric database, which scans faces across public internet data and has been used by law enforcement agencies around the world, including police departments in the United States and a number of federal agencies. A number of Democrats have urged federal agencies to drop their contracts with Clearview AI, claiming that the tool is a severe threat to the privacy of everyday citizens. In a letter to the Department of Homeland Security, Sens. Ed Markey and Jeff Merkley and Reps. Pramila Jayapal and Ayanna Pressley urged regulators to discontinue their use of the tool.

“Clearview AI reportedly scrapes billions of photos from social media sites without permission from or notice to the pictured individuals. In conjunction with the company’s facial recognition capabilities, this trove of personal information is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified,” wrote the authors of the letter.

Despite losing troves of facial recognition data from entire countries, Clearview AI has a plan to rapidly expand this year. The company told investors that it is on track to have 100 billion photos of faces in its database within a year, reported The Washington Post. In its pitch deck, the company said it hopes to secure an additional $50 million from investors to build even more facial recognition tools and ramp up its lobbying efforts.

Source: Italy slaps facial recognition firm Clearview AI with €20 million fine | Engadget

Ice Cream Machine Repairers Sue McDonald’s for $900 Million

For years, the tiny startup Kytch worked to invent and sell a device designed to fix McDonald’s notoriously broken ice cream machines, only to watch the fast food Goliath crush their business like the hopes of so many would-be McFlurry customers. Now Kytch is instead seeking to serve out cold revenge—nearly a billion dollars worth of it.

Late Tuesday night, Kytch filed a long-expected legal complaint against McDonald’s, accusing the company of false advertising and tortious interference in its contracts with customers. Kytch’s cofounders, Melissa Nelson and Jeremy O’Sullivan, are asking for no less than $900 million in damages.

Since 2019, Kytch has sold a phone-sized gadget designed to be installed inside McDonald’s ice cream machines. Those Kytch devices would intercept the ice cream machines’ internal communications and send them out to a web or smartphone interface to help owners remotely monitor and troubleshoot the machines’ many foibles, which are so widely acknowledged that they’ve become a full-blown meme among McDonald’s customers. The two-person startup’s new claims against McDonald’s focus on emails the fast food giant sent to every franchisee in November 2020, instructing them to pull Kytch devices out of their ice cream machines immediately.

Those emails warned franchisees that the Kytch devices not only violated the ice cream machines’ warranties and intercepted their “confidential information” but also posed a safety threat and could lead to “serious human injury,” a claim that Kytch describes as false and defamatory. Kytch also notes that McDonald’s used those emails to promote a new ice cream machine, built by its longtime appliance manufacturing partner Taylor, that would offer similar features to Kytch. The Taylor devices, meanwhile, have yet to see public adoption beyond a few test installations.

Kytch cofounder Melissa Nelson says the emails didn’t just result in McDonald’s ice cream machines remaining broken around the world. (About one in seven of the machines in the US remained out of commission on Monday, according to McBroken.com, which tracks the problem in real time.) They also kneecapped Kytch’s fast-growing sales just as the startup was taking off. “They’ve tarnished our name. They scared off our customers and ruined our business. They were anti-competitive. They lied about a product that they said would be released,” Nelson says. “McDonald’s had every reason to know that Kytch was safe and didn’t have any issues. It was not dangerous, like they claimed. And so we’re suing them.”

Before it found itself in conflict with soft-serve superpowers, Kytch had shown some early success in solving McDonald’s ice cream headaches. Its internet-connected add-on gadget helped franchisees avoid problems like hours of downtime when Taylor’s finicky daily pasteurization cycle failed. McDonald’s restaurant owners interviewed by WIRED liked the device; one said it saved him “easily thousands of dollars a month” from lost revenue and repair fees. Kytch says that by the end of 2020 it had 500 customers and was doubling its sales every quarter—all of which evaporated when McDonald’s ordered its franchisees to ditch Kytch’s gadgets.

Kytch first fired back against the fast-food ice cream establishment last May, suing Taylor and its distributor TFG for theft of trade secrets. The Kytch founders argued in that lawsuit that Taylor worked with TFG and one franchise owner to stealthily obtain a Kytch device, reverse-engineer it, and attempt to copy its features.

But all along, Kytch’s cofounders have hinted that they intended to use the discovery process in their lawsuit against Taylor to dig up evidence for a suit against McDonald’s too. In fact, the 800 pages of internal Taylor emails and presentations that Kytch has so far obtained in discovery show that it was McDonald’s, not Taylor, that at many points led the effort to study and develop a response to Kytch in 2020.

[…]

Source: Ice Cream Machine Hackers Sue McDonald’s for $900 Million | WIRED

UK Online Safety Bill to require more data to use social media – e.g. send them your passport

The country’s forthcoming Online Safety Bill will require citizens to hand over even more personal data to largely foreign-headquartered social media platforms, government minister Nadine Dorries has declared.

“The vast majority of social networks used in the UK do not require people to share any personal details about themselves – they are able to identify themselves by a nickname, alias or other term not linked to a legal identity,” said Dorries, Secretary of State for Digital, Culture, Media and Sport (DCMS).

Another legal duty to be imposed on social media platforms will be a requirement to give users a “block” button, something that has been part of most of today’s platforms since their launch.

“When it comes to verifying identities,” said DCMS in a statement, “some platforms may choose to provide users with an option to verify their profile picture to ensure it is a true likeness. Or they could use two-factor authentication where a platform sends a prompt to a user’s mobile number for them to verify.”

“Alternatively,” continued the statement, “verification could include people using a government-issued ID such as a passport to create or update an account.”

Two-factor authentication is a login technology to prevent account hijacking by malicious people, not a method of verifying a user’s government-approved identity.
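
The distinction is easy to make concrete. A standard second factor such as an RFC 6238 time-based one-time password only proves possession of a previously enrolled secret; it says nothing about the holder’s legal identity. A minimal sketch (a generic illustration of the standard, not any platform’s actual implementation):

```python
import base64
import hmac
import struct
import time
from typing import Optional

def totp(secret_b32: str, timestep: int = 30, digits: int = 6,
         now: Optional[float] = None) -> str:
    """RFC 6238 time-based one-time password.

    The code proves possession of the shared secret enrolled on a
    device; it says nothing about the holder's legal identity.
    """
    key = base64.b32decode(secret_b32)
    counter = int((now if now is not None else time.time()) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

Anyone holding the enrolled secret generates valid codes, which is exactly why 2FA stops account hijacking but cannot link an account to a passport.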

“People will now have more control over who can contact them and be able to stop the tidal wave of hate served up to them by rogue algorithms,” said Dorries.

Social networks offering services to Britons don’t currently require lots of personal data to register as a user. Most people see this as a benefit; the government seems to see it as a negative.

Today’s statement has led to widespread concerns that DCMS will place UK residents at greater risk of online identity theft or of falling victim to a data breach.

The Online Safety Bill was renamed from the Online Harms Bill shortly before its formal introduction to Parliament. Widely regarded by the technically literate as a disaster in the making, the bill, critics say, risks creating an “algorithm-driven censorship future” through new regulations that would make it legally risky for platforms not to proactively censor users’ posts.

It is also closely linked to strong rhetoric discouraging end-to-end encryption rollouts for the sake of “minors”, and its requirements would mean that tech platforms attempting to comply would have to weaken security measures.

Parliamentary efforts at properly scrutinising the draft bill instead led to the “scrutineers” publishing a manifesto asking for even stronger legal weapons to be included.

[…]

Source: Online Safety Bill to require more data to use social media

EU Data Watchdog Calls for Total Ban of Pegasus Spyware

Israeli authorities say it should be probed and U.S. authorities are calling for it to be sanctioned, but EU officials have a different idea for how to handle Pegasus spyware: just ban that shit entirely.

That’s the main takeaway from a new memo released on Tuesday by the EDPS, the Union’s dedicated data watchdog, noting that a full-on ban across the entire region is the only appropriate response to the “unprecedented risks” the tech poses—not only to people’s devices but “to democracy and the rule of law.”

“As the specific technical characteristics of spyware tools like Pegasus make control over their use very difficult, we have to rethink the entire existing system of safeguards established to protect our fundamental rights and freedoms,” the report reads. “Pegasus constitutes a paradigm shift in terms of access to private communications and devices. This fact makes its use incompatible with our democratic values.”

A “paradigm shift” is a good way to describe the tool, which has been used to target a mounting number of civic actors, activists, and political figures from around the globe, including some notable figures from inside the EU. This past summer, local outlets reported that French president Emmanuel Macron surfaced on a list of people whom foreign actors had planned to target with the software, and later reports revealed traces of the tech on phones belonging to Macron’s current staffers. Officials from other EU member states like Hungary and Spain have also reported the tech on their devices, and Poland became the latest member to join the list last month when a team of researchers found the spyware being used to surveil three outspoken critics of the Polish government.

[…]

Source: EU Data Watchdog Calls for Total Ban of Pegasus Spyware

100 Billion Face Photos? Clearview AI tells investors it’s On Track to Identify ‘Almost Everyone in the World’

The Washington Post reports: Clearview AI is telling investors it is on track to have 100 billion facial photos in its database within a year, enough to ensure “almost everyone in the world will be identifiable,” according to a financial presentation from December obtained by The Washington Post.

Those images — equivalent to 14 photos for each of the 7 billion people on Earth — would help power a surveillance system that has been used for arrests and criminal investigations by thousands of law enforcement and government agencies around the world. And the company wants to expand beyond scanning faces for the police, saying in the presentation that it could monitor “gig economy” workers and is researching a number of new technologies that could identify someone based on how they walk, detect their location from a photo or scan their fingerprints from afar.

The 55-page “pitch deck,” the contents of which have not been reported previously, reveals surprising details about how the company, whose work already is controversial, is positioning itself for a major expansion, funded in large part by government contracts and the taxpayers the system would be used to monitor. The document was made for fundraising purposes, and it is unclear how realistic its goals might be. The company said that its “index of faces” has grown from 3 billion images to more than 10 billion since early 2020 and that its data collection system now ingests 1.5 billion images a month.

With $50 million from investors, the company said, it could bulk up its data collection powers to 100 billion photos, build new products, expand its international sales team and pay more toward lobbying government policymakers to “develop favorable regulation.”

The article notes that major tech companies like Amazon, Google, IBM and Microsoft have all limited or ended their own sales of facial recognition technology — adding that Clearview’s presentation simply describes this as a major business opportunity for itself.

In addition, the Post reports Clearview’s presentation brags “that its product is even more comprehensive than systems in use in China, because its ‘facial database’ is connected to ‘public source metadata’ and ‘social linkage’ information.”

Source: 100 Billion Face Photos? Clearview AI tells investors it’s On Track to Identify ‘Almost Everyone in the World’ – Slashdot

It’s Back: Senators Want ‘EARN IT’ Bill To Scan All Online Messages by private companies – also misusing children as an excuse

A group of lawmakers have re-introduced the EARN IT Act, an incredibly unpopular bill from 2020 that “would pave the way for a massive new surveillance system, run by private companies, that would roll back some of the most important privacy and security features in technology used by people around the globe,” writes Joe Mullin via the Electronic Frontier Foundation. “It’s a framework for private actors to scan every message sent online and report violations to law enforcement. And it might not stop there. The EARN IT Act could ensure that anything hosted online — backups, websites, cloud photos, and more — is scanned.” From the report: The bill empowers every U.S. state or territory to create sweeping new Internet regulations, by stripping away the critical legal protections for websites and apps that currently prevent such a free-for-all — specifically, Section 230. The states will be allowed to pass whatever type of law they want to hold private companies liable, as long as they somehow relate their new rules to online child abuse. The goal is to get states to pass laws that will punish companies when they deploy end-to-end encryption, or offer other encrypted services. This includes messaging services like WhatsApp, Signal, and iMessage, as well as web hosts like Amazon Web Services. […]

Separately, the bill creates a 19-person federal commission, dominated by law enforcement agencies, which will lay out voluntary “best practices” for attacking the problem of online child abuse. Regardless of whether state legislatures take their lead from that commission, or from the bill’s sponsors themselves, we know where the road will end. Online service providers, even the smallest ones, will be compelled to scan user content, with government-approved software like PhotoDNA. If EARN IT supporters succeed in getting large platforms like Cloudflare and Amazon Web Services to scan, they might not even need to compel smaller websites — the government will already have access to the user data, through the platform. […] Senators supporting the EARN IT Act say they need new tools to prosecute cases over child sexual abuse material, or CSAM. But the methods proposed by EARN IT take aim at the security and privacy of everything hosted on the Internet.

The Senators supporting the bill have said that their mass surveillance plans are somehow magically compatible with end-to-end encryption. That’s completely false, no matter whether it’s called “client side scanning” or another misleading new phrase. The EARN IT Act doesn’t target Big Tech. It targets every individual internet user, treating us all as potential criminals who deserve to have every single message, photograph, and document scanned and checked against a government database. Since direct government surveillance would be blatantly unconstitutional and provoke public outrage, EARN IT uses tech companies — from the largest ones to the very smallest ones — as its tools. The strategy is to get private companies to do the dirty work of mass surveillance.
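
PhotoDNA itself is proprietary, but client-side scanning schemes generally rest on perceptual hashing: reduce each image to a compact fingerprint, then compare fingerprints against a database by Hamming distance. A toy average-hash sketch, for illustration only (real systems are far more robust to cropping and re-encoding):

```python
def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale image (values 0-255).

    Each bit records whether a pixel is brighter than the image mean,
    so small edits leave most bits, and the match decision, unchanged.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_database(img_hash: int, db_hashes, threshold: int = 10) -> bool:
    """Flag an image if its hash is near any database entry."""
    return any(hamming(img_hash, h) <= threshold for h in db_hashes)
```

The privacy objection follows directly: once every message attachment or cloud upload is fingerprinted and compared this way, the scanning infrastructure exists for whatever database a government chooses to supply.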

Source: It’s Back: Senators Want ‘EARN IT’ Bill To Scan All Online Messages – Slashdot

Revealed: UK Gov’t Plans Publicity Blitz to Undermine Chat Privacy, encryption. Of course they use children. And Fear.

The UK government is set to launch a multi-pronged publicity attack on end-to-end encryption, Rolling Stone has learned. One key objective: mobilizing public opinion against Facebook’s decision to encrypt its Messenger app.

The Home Office has hired the M&C Saatchi advertising agency — a spin-off of Saatchi and Saatchi, which made the “Labour Isn’t Working” election posters, among the most famous in UK political history — to plan the campaign, using public funds.

According to documents reviewed by Rolling Stone, one of the activities considered as part of the publicity offensive is a striking stunt — placing an adult and child (both actors) in a glass box, with the adult looking “knowingly” at the child as the glass fades to black. Multiple sources confirmed the campaign was due to start this month, with privacy groups already planning a counter-campaign.

[…]

Successive Home Secretaries of different political parties have taken strong anti-encryption stances, claiming the technology — which is essential for online privacy and security — will diminish the effectiveness of UK bulk surveillance capabilities, make fighting organized crime more difficult, and hamper the ability to stop terror attacks. The American FBI has made similar arguments in recent years — claims which have been widely debunked by technologists and civil libertarians on both sides of the Atlantic.

The new campaign, however, is entirely focused on the argument that improved encryption would hamper efforts to tackle child exploitation online.

[…]

One key slide notes that “most of the public have never heard” of end-to-end encryption – adding that this means “people can be easily swayed” on the issue. The same slide notes that the campaign “must not start a privacy vs safety debate.”

Online advocates slammed the UK government plans as “scaremongering” that could put children and vulnerable adults at risk by undermining online privacy.

[…]

In response to a Freedom of Information request about an “upcoming ad campaign directed at Facebook’s end-to-end encryption proposal,” The Home Office disclosed that, “Under current plans, c.£534,000 is allocated for this campaign.”

[…]

Source: Revealed: UK Gov’t Plans Publicity Blitz to Undermine Chat Privacy – Rolling Stone

Is Microsoft Stealing People’s Bookmarks, passwords, ID / passport numbers without consent?

I received email from two people who told me that Microsoft Edge enabled synching without warning or consent, which means that Microsoft sucked up all of their bookmarks. Of course they can turn synching off, but it’s too late.

Has this happened to anyone else, or was this user error of some sort? If this is real, can some reporter write about it?

(Not that “user error” is a good justification. Any system where making a simple mistake means that you’ve forever lost your privacy isn’t a good one. We see this same situation with sharing contact lists with apps on smartphones. Apps will repeatedly ask, and only need you to accidentally click “okay” once.)

EDITED TO ADD: It’s actually worse than I thought. Edge urges users to store passwords, ID numbers, and even passport numbers, all of which get uploaded to Microsoft by default when synch is enabled.

Source: Is Microsoft Stealing People’s Bookmarks? – Schneier on Security

Suicide Hotline Collected, Monetized The Data Of Desperate People, Because Of Course It Did

Crisis Text Line, one of the nation’s largest nonprofit support options for the suicidal, is in some hot water. A Politico report last week highlighted how the company has been caught collecting and monetizing the data of callers… to create and market customer service software. More specifically, Crisis Text Line says it “anonymizes” some user and interaction data (ranging from the frequency certain words are used, to the type of distress users are experiencing) and sells it to a for-profit partner named Loris.ai. Crisis Text Line has a minority stake in Loris.ai, and gets a cut of their revenues in exchange.

As we’ve seen in countless privacy scandals before this one, the idea that this data is “anonymized” is once again held up as some kind of get out of jail free card:

“Crisis Text Line says any data it shares with that company, Loris.ai, has been wholly “anonymized,” stripped of any details that could be used to identify people who contacted the helpline in distress. Both entities say their goal is to improve the world — in Loris’ case, by making “customer support more human, empathetic, and scalable.”

But as we’ve noted more times than I can count, “anonymized” is effectively a meaningless term in the privacy realm. Study after study after study has shown that it’s relatively trivial to identify a user’s “anonymized” footprint when that data is combined with a variety of other datasets. For a long time the press couldn’t be bothered to point this out, something that’s thankfully starting to change.
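
The classic demonstration is a linkage attack in the style of Latanya Sweeney’s ZIP/birth-date/sex result: join the “anonymized” records with a public auxiliary dataset on quasi-identifiers. A minimal sketch, with field names and data invented purely for illustration:

```python
def reidentify(anonymized, auxiliary):
    """Link 'anonymized' records to named rows in a public dataset.

    Any record whose quasi-identifiers (zip, birth_year, sex) match
    exactly one auxiliary row is re-identified, despite carrying no
    name or other direct identifier of its own.
    """
    matches = {}
    for rec in anonymized:
        key = (rec["zip"], rec["birth_year"], rec["sex"])
        candidates = [row for row in auxiliary
                      if (row["zip"], row["birth_year"], row["sex"]) == key]
        if len(candidates) == 1:  # unique match: identity recovered
            matches[rec["record_id"]] = candidates[0]["name"]
    return matches
```

With a voter roll or scraped social media profiles as the auxiliary dataset, a handful of coarse attributes is often enough to single people out, which is why stripping names does not make data anonymous.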

[…]

Source: Suicide Hotline Collected, Monetized The Data Of Desperate People, Because Of Course It Did | Techdirt

Google adds new opt out tracking for Workspace Customers

[…]

according to a new FAQ posted on Google’s Workspace administrator forum. At the end of that month, the company will be adding a new feature—“Workspace search history”—that can continue to track these customers, even if they, or their admins, turn activity tracking off.

The worst part? Unlike Google’s activity trackers that are politely defaulted to “off” for all users, this new Workspace-specific feature will be defaulted to “on,” across Workspace apps like Gmail, Google Drive, Google Meet, and more.

[…]

Luckily, they can turn this option off if they want to, the same way they could turn off activity settings until now. According to Google, the option to do so will be right on the “My Activity” page once the feature goes live, right alongside the current options to flip off Google’s ability to keep tabs on their web activity, location history, and YouTube history. On this page, Google says the option to turn off Workspace history will be located on the far lefthand side, under the “Other Google Activity” tab.

[…]

Source: Google Makes Opting Out Harder for Workspace Customers

LG Announces New Ad Targeting Features for TVs – wait, wtf, I bought my TV, not a service!

[…]

there are plenty of cases where you throw down hundreds of dollars for a piece of hardware and then you end up being the product anyway. Case in point: TVs.

On Wednesday, the television giant LG announced a new offering to advertisers that promises to be able to reach the company’s millions of connected devices in households across the country, pummeling TV viewers with—you guessed it—targeted ads. While ads playing on your connected TV might not be anything new, some of the metrics the company plans to hand over to advertisers include targeting viewers by specific demographics, for example, or being able to tie a TV ad view to someone’s in-store purchase down the line.

If you swap out a TV screen for a computer screen, the kind of microtargeting that LG’s offering doesn’t sound any different than what a company like Facebook or Google would offer. That’s kind of the point.

[…]

Aside from being an eyesore that literally no TV user wants, these ads come bundled with their own privacy issues, too. While the kinds of invasive tracking and targeting that regularly happen with the ads on your Facebook feed or Google search results are built off of more than a decade’s worth of infrastructure, those in the connected television (or so-called “CTV”) space are clearly catching up, and catching up fast. Aside from what LG’s offering, there are other players in adtech right now that offer ways to connect your in-app activity, or the billboards you walk past, to what you watch on TV. For whatever reason, this sort of tech largely sidesteps the kinds of privacy snafus that regulators are trying to wrap their heads around right now—regulations like CPRA and GDPR are largely designed to govern how your data is handled on the web, not on TV.

[…]

The good news is that you have some sort of refuge from this ad-ridden hell, though it does take a few extra steps. If you own a smart TV, you can simply not connect it to the internet and use another device—an ad-free set-top box like an Apple TV, for instance—to access apps. Sure, a smart TV is dead simple to use, but the privacy trade-offs might wind up being too great.

Source: LG Announces New Ad Targeting Features for TVs

How normal am I? – Let an AI judge you

This is an art project by Tijmen Schep that shows how face detection algorithms are increasingly used to judge you. It was made as part of the European Union’s Sherpa research program.

No personal data is sent to our server in any way. Nothing. Zilch. Nada. All the face detection algorithms will run on your own computer, in the browser.

In this ‘test’ your face is compared with that of all the other people who came before you. At the end of the show you can, if you want to, share some anonymized data. That will then be used to re-calculate the new average. That anonymous data is not shared any further.

Source: How normal am I?

How to Download Everything Amazon Knows About You (It’s a Lot)

[…]To be clear, data collection is far from an Amazon-specific problem; it’s pretty much par for the course when it comes to tech companies. Even Apple, a company vocal about user privacy, has faced criticism in the past for recording Siri interactions and sharing them with third-party contractors.

The issue with Amazon, however, is the extent to which they collect and archive your data. Just about everything you do on, with, and around an Amazon product or service is logged and recorded. Sure, you might not be surprised to learn that when you visit Amazon’s website, the company logs your browsing history and shopping data. But it goes far beyond that. Since Amazon owns Whole Foods, it also saves your shopping history there. When you watch video content through its platforms, it records all of that information, too.

Things get even creepier with other Amazon products. If you read books on a Kindle, Amazon records your reading activity, including the speed of your page turns (I wonder if Bezos prefers a slow or fast page flip); if you peered into your Amazon data, you might find something similar to what a Reuters reporter found: On Aug. 8, 2020, someone on that account read The Mitchell Sisters: A Complete Romance Series from 4:52 p.m. through 7:36 p.m., completing 428 pages. (Nice sprint.)

If you have one of Amazon’s smart speakers, you’re on the record with everything you’ve ever uttered to the device: When you ask Alexa a question or give it a command, Amazon saves the audio files for the entire interaction. If you know how to access your data, you can listen to every one of those audio files, and relive moments you may or may not have realized were recorded.

Another Reuters reporter found Amazon saved over 90,000 recordings over a three-and-a-half-year period, which included the reporter’s children asking Alexa questions, recordings of those same children apologizing to their parents, and, in some cases, extended conversations that were outside the scope of a reasonable Alexa query.

Unfortunately, while you can access this data, Amazon doesn’t make it possible to delete much of it. You can tweak your privacy settings to stop your devices from recording quite as much information. However, once data is logged, the main way to delete it is to delete the entire account it is associated with. But even if you can’t delete the data while keeping your account, you do have a right to see what data Amazon has on you, and it’s simple to request.

How to download all of your Amazon data

To start, go to Amazon’s Help page. You’ll find the link under Security and Privacy > More in Security & Privacy > Privacy > How Do I Request My Data? Once there, click the “Request My Data” link.

From the dropdown menu, choose the data you want from Amazon. If you want everything, choose “Request All Your Data.” Hit “Submit Request,” then click the validation link in your email. That’s it. Amazon makes it easy to see what they have on you, probably because they know you can’t do anything about it.

[Reuters]

Source: How to Download Everything Amazon Knows About You (It’s a Lot)

The IEA wants to make their data available to the public – now it is on governments of the world’s rich countries to make this happen

To tackle climate change we need good data. This data exists; it is published by the International Energy Agency (IEA). But despite being an institution that is largely publicly funded, most IEA data is locked behind paywalls.

[…]

In 2020 we launched a campaign to unlock this data; we started on Twitter (one example), last year we wrote a detailed article about the problem here on OWID, and published our letter in Nature.

[…]

The IEA has just announced that it aims to make all of its data and analysis freely available and open-access. This was put forward by the IEA’s executive director, Fatih Birol, and has been approved by its governing board already.

There is one step left. Next month – on February 2nd and 3rd – the IEA will ask for approval from its member countries. That means it is on the governments of the world’s rich countries to make this happen. If they do not approve it, it would be a missed opportunity to accelerate our action on addressing climate change.

This would be a massive achievement. The benefits of closing the small funding gap that remains greatly outweigh the costs.

There is now broad support for making the IEA data freely available, from researchers and journalists to policymakers and innovators. Many have called for the IEA data to be public. Many thanks to everyone who has joined in pushing this forwards; below we share links to several articles, petitions, and open letters that have made this possible.

Open letter to the International Energy Agency and its member countries: please remove paywalls from global energy data and add appropriate open licenses – by Robbie Morrison, Malte Schaefer and the OpenMod community

Energy watchdog urged to give free access to government data – Jillian Ambrose, in The Guardian

Opening up energy data is critical to battling climate change – Christa Hasenkopf, in Devex

Researchers are excited by ‘tantalising’ prospect of open IEA energy data – Joe Lo, in Climate Home

Open petition letter: Free IEA Data – A site by Skander Garroum and Christoph Proeschel on which you can write a letter to your country’s government.

[…]

Source: The IEA wants to make their data available to the public – now it is on governments of the world’s rich countries to make this happen – Our World in Data

Totally Bogus DMCA Takedowns From Giant Publishers Completely Nuke Book Review Blog Off The Internet

Just as we’re in the midst of a Greenhouse series all about SOPA, copyright industry lobbyists and former copyright industry lawyers now running the Copyright Office are conspiring to make copyright law worse: to favor Hollywood and give the giant legacy copyright companies even more control and power over the internet.

And, yet, we pay almost no attention to how they massively abuse the power they already have under copyright law to silence people. The latest example is the book review blog, Fantasy Book Critic. I’d link to it, but as I’m writing this all you now see is a message that says “Sorry, the blog at fantasybookcritic.blogspot.com has been removed.”

Why? Because two of the largest publishing companies in the world, Penguin Random House and HarperCollins, hired a ridiculously incompetent service provider called “Link-Busters” which specializes in bullshit automated DMCA takedowns for the publishing industry. Link-Busters’ website looks like basically all of these sketchy, unreliable services, promising to “protect IP” and (even more ridiculously) “turn piracy into profits.”

[…]

On Monday, Link-Busters, on behalf of Penguin Random House and HarperCollins, sent over 50 bullshit takedown notices to Google, claiming that various reviews on Fantasy Book Critic were actually infringing copies of the books they were reviewing. Each notice listed many, many blog posts on the site. This is just a small sample of four such notices.

The actual notices do contain some links to websites that appear to host pirated copies of some books, but also lots of links to Fantasy Book Critic’s reviews. The whole thing just seems incredibly sloppy on Link-Busters’ part. Some notices list “allegedly infringing” books without including any links to allegedly infringing pages at all.

And then some show the only allegedly “infringing” links being… Fantasy Book Critic’s reviews:

That link, which, again, no longer exists, can be seen on the Internet Archive, where you can see that not only is it clearly a review and not piracy, but it directly links visitors to places where they can buy the book.

[…]

The real problem here is that there are no consequences whatsoever for Link-Busters or Penguin Random House or HarperCollins. While the DMCA has Section 512(f), which is supposed to punish false notifiers, in practice it is a dead letter. This means Link-Busters can spam Google with wild abandon with blatantly false DMCA notices and face zero consequences. More importantly, publishing giants like Penguin Random House and HarperCollins (which are currently suing libraries for offering lendable ebooks) can get away with this abuse of the law over and over again.

Fantasy Book Critic was reduced to begging on Twitter for Google to look more closely at Link-Busters’ bogus notifications and to restore their blog. They even contacted Link-Busters, which admitted that they fucked up (though perhaps they should have checked before sending these bogus notices?).

[…]

Source: Totally Bogus DMCA Takedowns From Giant Publishers Completely Nuke Book Review Blog Off The Internet | Techdirt

WhatsApp Ordered To Help US Agents Spy On Chinese Phones using 1986 pen register act

U.S. federal agencies have been using a 35-year-old American surveillance law to secretly track WhatsApp users with no explanation as to why and without knowing whom they are targeting. In Ohio, a just-unsealed government surveillance application reveals that in November 2021, DEA investigators demanded the Facebook-owned messaging company track seven users based in China and Macau. The application reveals the DEA didn’t know the identities of any of the targets, but told WhatsApp to monitor the IP addresses and numbers with which the targeted users were communicating, as well as when and how they were using the app. Such surveillance is done using a technology known as a pen register and under the 1986 Pen Register Act, and doesn’t seek any message content, which WhatsApp couldn’t provide anyway, as it is end-to-end encrypted.

As Forbes previously reported, over at least the last two years, law enforcement in the U.S. has repeatedly ordered WhatsApp and other tech companies to install these pen registers without showing any probable cause. As in those previous cases, the government order to trace Chinese users came with the statement that the Justice Department only needed to provide three “elements” to justify tracking of WhatsApp users. They include: the identity of the attorney or the law enforcement officer making the application; the identity of the agency making the application; and a certification from the applicant that “the information likely to be obtained is relevant to an ongoing criminal investigation being conducted by that agency.” “Other than the three elements described above, federal law does not require that an application for an order authorizing the installation and use of a pen register and a trap and trace device specify any facts,” the government wrote in the latest application.

Source: WhatsApp Ordered To Help US Agents Spy On Chinese Phones – Slashdot

Canon can’t get enough toner chips, so it’s telling customers how to defeat its DRM

[…] To enforce the use of first-party cartridges, manufacturers typically embed chips inside the consumables for the printers to “authenticate.” But when chips are in short supply, like today, manufacturers can find themselves in a bind. So Canon is now telling German customers how to defeat its printers’ warnings about third-party cartridges.

“Due to the worldwide continuing shortage of semiconductor components, Canon is currently facing challenges in procuring certain electronic components that are used in our consumables for our multifunction printers (MFP),” a Canon support website says in German. “In order to ensure a continuous and reliable supply of consumables, we have decided to supply consumables without a semiconductor component until the normal supply takes place again.”

[…]

The software on these printers comes with a relatively simple way to defeat the chip checks. Depending on the model, when an error message occurs after inserting toner, users can press either “I Agree,” “Close,” or “OK.” When users press that button, the world does not end. Rather, Canon says users may find that their toner cartridge doesn’t give them a low-toner warning before running empty.

“Although there are no negative effects on print quality when consumables are used without electronic components, certain additional functions, such as the detection of the toner level, may be impaired,” Canon’s support site says.

Source: Canon can’t get enough toner chips, so it’s telling customers how to defeat its DRM | Ars Technica

Facebook Pixel Hunt – Mozilla Rally want to track the trackers

In a collaboration between journalists at The Markup and Mozilla researchers, this study seeks to map Facebook’s pixel tracking network and understand the kinds of information it collects on sites across the web. The Markup will use the data collected in this study to create investigative journalism around the kinds of information Facebook collects about you, and where.

The study will run until July 13, 2022.

Goals of the Study

According to its own privacy policy, Facebook may collect information about you across the web even if you don’t have a Facebook account. One way Facebook performs this tracking is through a network of “pixels” that may be installed on many of the sites you visit. By joining this study, you will help Rally and The Markup investigate and report on where Facebook is tracking you and what kind of information they are collecting.

This Study Will Collect:

  • The data sent to Facebook pixels as you browse
  • The URLs of the web pages you browse
  • The time you spend browsing pages
  • The presence of Facebook login cookies in your browser
  • A study survey that the user completes
  • Metadata on the URLs you visit:
    • The full URL of each webpage that you are on
    • Time spent browsing and playing media on each webpage
    • How far down the webpage you scrolled

In addition, your Rally demographics survey responses will be combined with study data for the analysis.

Note: Only deidentified metrics and models will be exported from our secure environment. For additional information about our data collection, view our metrics definition file in our open source codebase.
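The first item the study collects, “the data sent to Facebook pixels as you browse,” refers to the HTTP requests a pixel fires back to Facebook from each page. As a rough sketch of what such a request carries, here is a minimal Python parser for a pixel-style URL; the parameter names (`id`, `ev`, `dl`) follow the commonly observed `facebook.com/tr` endpoint, and the example values are made up.

```python
from urllib.parse import urlparse, parse_qs

def summarize_pixel_request(url: str) -> dict:
    """Extract the tracking parameters from a Facebook-pixel-style request URL.

    Parameter names (id, ev, dl) follow the commonly observed
    facebook.com/tr endpoint; treat them as illustrative, not a spec.
    """
    query = parse_qs(urlparse(url).query)
    return {
        "pixel_id": query.get("id", [None])[0],   # the site owner's pixel ID
        "event": query.get("ev", [None])[0],      # e.g. PageView, Purchase
        "page_url": query.get("dl", [None])[0],   # the page you were on
    }

# Example: the kind of request a page with an embedded pixel might fire.
req = "https://www.facebook.com/tr?id=123456&ev=PageView&dl=https%3A%2F%2Fexample.com%2Farticle"
print(summarize_pixel_request(req))
```

Even this tiny subset shows why the study pairs pixel data with the URLs you visit: the pixel itself reports the full page address back to Facebook.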

Source: Facebook Pixel Hunt

John Deere Hit With Class Action Lawsuit for Alleged Tractor Repair Monopoly

A class action lawsuit filed in Chicago has accused John Deere of running an illegal repair monopoly. The lawsuit alleged that John Deere has used software locks and restricted access to repair documentation and tools, making it very difficult for farmers to fix their own agricultural equipment, a problem that Motherboard has documented for years and that lawmakers, the FTC, and even the Biden administration have acknowledged.

[…]

The situation is so bad that it’s created a boom in the secondary market. Used tractors are selling for hundreds of thousands of dollars, in part, because they’re easier to repair than modern machines.

Forest River Farms, a farming corporation in North Dakota, filed the recent antitrust lawsuit against John Deere, alleging that “Deere’s network of highly-consolidated independent dealerships is not permitted through their agreements with Deere to provide farmers or repair shops with access to the same software and repair tools the Dealerships have.”

[…]

Last year, President Biden signed an executive order aimed at making it easier for everyone to fix their own stuff. He also directed the FTC to formally adopt a pro right-to-repair platform. Legislation has been introduced in congress that would enshrine the right-to-repair and similar laws are working their way through various statehouses across the country. Microsoft’s shareholders have pressed the company to do more for repair and even Apple is backing away from its monopolistic repair practices.

[…]

Source: John Deere Hit With Class Action Lawsuit for Alleged Tractor Repair Monopoly

German IT security watchdog: No evidence of censorship function in Xiaomi phones

Germany’s federal cybersecurity watchdog, the BSI, did not find any evidence of censorship functions in mobile phones manufactured by China’s Xiaomi Corp (1810.HK), a spokesperson said on Thursday.

Lithuania’s state cybersecurity body had said in September that Xiaomi phones had a built-in ability to detect and censor terms such as “Free Tibet”, “Long live Taiwan independence” or “democracy movement”. The BSI started an examination following these accusations, which lasted several months. read more

“As a result, the BSI was unable to identify any anomalies that would require further investigation or other measures,” the BSI spokesperson said.

Source: German IT security watchdog: No evidence of censorship function in Xiaomi phones | Reuters

Google’s and Facebook’s top execs accused of fixing ads

The alleged 2017 deal between Google and Facebook to kill header bidding, a way for multiple ad exchanges to compete fairly in automated ad auctions, was negotiated by Facebook COO Sheryl Sandberg, and endorsed by both Facebook CEO Mark Zuckerberg (now with Meta) and Google CEO Sundar Pichai, according to an updated complaint filed in the Texas-led antitrust lawsuit against Google.

Texas, 14 other US states, and the Commonwealths of Kentucky and Puerto Rico accused Google of unlawfully monopolizing the online ad market and rigging ad auctions in a December 2020 lawsuit. The plaintiffs subsequently filed an amended complaint in October 2021 that includes details previously redacted.

On Friday, Texas et al. filed a third amended complaint [PDF] that fills in more blanks and expands the allegations by 69 more pages.

The fortified filing adds additional information about previous revelations and extends the scope of concern to cover in-app advertising in greater detail.

Presently, there are three other US government-backed unfair competition claims against Google ongoing: a federal antitrust lawsuit from the US Justice Department, a challenge from Colorado and 38 other State Attorneys General (filed around the same time as the Texas-led complaint), as well as a competition claim focused on Android and the Google Play Store filed last July.

The third amended complaint delves into more detail about how Google allegedly worked “to kill header bidding.”

[…]

The deal, referred to as “Jedi Blue” internally and eventually as “Open Bidding” when discussed publicly, allegedly allowed Facebook to win ad auctions even when outbid by competitors.

The third amended complaint explains, “Facebook’s Chief Operating Officer [REDACTED] was explicit that ‘[t]his is a big deal strategically’ in an email thread that included Facebook CEO [REDACTED].”

[…]

The expanded filing includes new allegations about how Google used Accelerated Mobile Pages to hinder header bidding.

Google first created Accelerated Mobile Pages (“AMP”), a framework for developing mobile webpages, and made AMP compatible with Google’s ad server but substantially hindered compatibility with header bidding. Specifically, Google made AMP unable to execute JavaScript in the header, which frustrated publishers’ use of header bidding.
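To see why blocking header JavaScript matters, it helps to contrast header bidding with the older sequential “waterfall” it replaced. The toy simulation below sketches the difference; the exchange names, prices, and floor are entirely made up.

```python
# Toy simulation of why publishers like header bidding: every exchange
# competes at once, so the highest bid wins. In a sequential "waterfall",
# exchanges are tried in a fixed priority order and the first bid that
# clears the floor wins, even if a later exchange would have paid more.
# All names and prices here are hypothetical.
bids = {"ExchangeA": 1.10, "ExchangeB": 2.50, "ExchangeC": 1.80}

def header_bidding(bids):
    # All exchanges bid simultaneously (via JS running in the page header);
    # the publisher's ad server sees the single highest bid.
    return max(bids.items(), key=lambda kv: kv[1])

def waterfall(bids, order, floor=1.00):
    # Exchanges are tried one at a time; the first bid at or above the
    # floor wins and the remaining exchanges never get to compete.
    for name in order:
        if bids[name] >= floor:
            return name, bids[name]
    return None, 0.0

print(header_bidding(bids))                                      # ('ExchangeB', 2.5)
print(waterfall(bids, ["ExchangeA", "ExchangeB", "ExchangeC"]))  # ('ExchangeA', 1.1)
```

The simultaneous auction pays the publisher more in this example; that revenue gap is why, per the complaint, frustrating header bidding in AMP mattered.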

[…]

What’s more, the revised filing adds support for the claim that a Google ad program called Dynamic Revenue Share or DRS cheated to help Google win more valuable ad impressions.

“DRS manipulated Google’s exchange fee after soliciting bids in the auction and after peeking at rival exchanges’ bids to win impressions it would have otherwise lost,” the revised complaint says.

And the complaint now contends that Google personnel admitted the unfairness of the DRS system: “Google internally acknowledged that DRS made its auction untruthful: ‘One known issue with the current DRS is that it makes the auction untruthful as we determine the AdX revshare after seeing buyers’ bids and use winner’s bid to price itself (first-pricing)….'”
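A small numeric sketch makes the alleged DRS mechanic concrete: if the exchange fee is fixed before bids arrive, the auction is “truthful,” but adjusting the fee after peeking at rivals’ bids lets the exchange win impressions it would otherwise lose. All figures below are hypothetical illustrations of the complaint’s claim, not numbers from the filing.

```python
def adx_net_bid(advertiser_bid, revshare):
    # The exchange passes (1 - revshare) of the advertiser's bid through
    # to the publisher; that net amount competes against rival exchanges.
    return advertiser_bid * (1 - revshare)

advertiser_bid = 1.00   # what the buyer on Google's exchange offered (hypothetical)
rival_best = 0.85       # best net bid from a competing exchange (hypothetical)
standard_fee = 0.20     # a nominal 20% revenue share

# With the fixed fee, Google's exchange loses the impression: 0.80 < 0.85.
assert adx_net_bid(advertiser_bid, standard_fee) < rival_best

# The alleged DRS move: after seeing the rival's bid, shrink the fee
# just enough to win the impression anyway: 0.90 > 0.85.
drs_fee = 0.10
assert adx_net_bid(advertiser_bid, drs_fee) > rival_best
```

Determining the fee after seeing the bids is exactly what the quoted internal note means by the auction becoming “untruthful.”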

[…]

Source: Google’s and Facebook’s top execs accused of fixing ads • The Register

Google and Facebook Fined Big in Russia for Failing to Remove Banned Content – imprisonment threats used to force local data storage

A Russian court fined Alphabet Inc.’s Google 7.2 billion rubles ($98 million) and Meta Platforms Inc. 2 billion rubles Friday for failing to remove banned content, the largest such penalties yet, as the authorities escalate a crackdown on foreign technology companies.

The fines were due to the companies’ repeated failure to comply with orders to take down content and based on a percentage of their annual earnings in Russia, the federal communications watchdog said in a statement. Google and Meta could face more fines if they don’t remove the material, it said.

[…]

The government is also pushing tech companies to comply with its increasingly strict laws on localizing data storage. This year, Google and Apple Inc. removed a protest-voting app from their Russian stores during parliamentary elections after the authorities threatened to imprison their local staff.

Until the latest rulings, however, fines for failure to remove content were generally insignificant. In September, Russia’s federal communications watchdog said companies that did not delete content could face fines of 5% to 20% of their annual local revenue.

Google earned revenues in Russia of about 85 billion rubles in 2020, according to the Spark-Interfax database.

“For some reason, the company fulfills decisions of American and European courts unquestioningly,” Anton Gorelkin, a ruling party deputy in the lower house of parliament who sits on the Information Policy committee, wrote on Telegram after the Google ruling was announced Friday. “If the turnover fine doesn’t bring Google to its senses, I’m afraid that some very unpleasant measures will be taken.”

[…]

Source: Google in Russia Fined $98 Million for Failing to Remove Banned Content – Bloomberg

Snap suing to trademark the word “spectacles” for its smart glasses that no one has ever used or knows much about

Snap is suing the US Patent and Trademark Office (USPTO) for rejecting its application to trademark the word “spectacles” for its digital eyewear camera device. But the USPTO has maintained that “spectacles” is a generic term for smart glasses and that Snap’s version “has not acquired distinctiveness,” as required for a trademark.

In its complaint filed Wednesday in US District Court in California, Snap claims that the Spectacles name “evokes an incongruity between an 18th century term for corrective eyewear and Snap’s high-tech 21st century smart glasses. SPECTACLES also is suggestive of the camera’s purpose, to capture and share unusual, notable, or entertaining scenes (i.e., “spectacles”) while also encouraging users to make ‘spectacles’ of themselves.”

Snap first introduced its camera-equipped Spectacles in 2016 (“a wearable digital video camera housed in a pair of fashionable sunglasses,” according to its complaint), which can take photos and videos while the user wears them and connects with the Snap smartphone app. Despite selling them both online and in pop-up vending machines around the world, the first iteration of Spectacles mostly flopped with consumers. In its 2017 third-quarter earnings report, Snap said it had lost nearly $40 million on some 300,000 unsold Spectacles.

In May 2021, Snap CEO Evan Spiegel showed off an augmented reality version of the Spectacles, which so far are available only to a small group of creators and reviewers chosen by the company. The AR Spectacles aren’t yet available for purchase by the general public.

Snap’s new complaint posits that there’s been enough media coverage of Spectacles, bolstered by some industry awards and its own marketing including social media, to support its claim that consumers associate the word “spectacles” with the Snap brand. Snap first filed a trademark application for Spectacles in September 2016, “for use in connection with wearable computer hardware” and other related uses “among consumer electronics devices and displays.”

During several rounds of back-and-forth with the company since then, the USPTO has maintained that the word “spectacles” appeared to be “generic in connection with the identified goods,” i.e. the camera glasses. Snap continued to appeal the agency’s decision.

In a November 2021 opinion, the USPTO’s Trademark Trial and Appeal Board (pdf) upheld the decision, reiterating that the word “spectacles” was a generic term that applied to all smart glasses, not just Snap’s version. Despite the publicity Snap claimed its Spectacles had received from its marketing and social media, the board noted in its opinion that Spectacles’ “social media accounts have an underwhelming number of followers, and the number of followers is surprisingly small,” which didn’t support the company’s argument that there had been a high enough level of consumer exposure to Snap’s Spectacles to claim that consumers associated the word with Snap’s brand.

[…]

Source: Snap suing to trademark the word “spectacles” for its smart glasses

This App Will Tell Android Users If an AirTag Is Tracking Them

Apple’s AirTags and Find My service can be helpful for finding things you lose—but they also introduce a big privacy problem. While those of us on iOS have had some tools for fighting those issues, Apple left those of us on Android without much to work with. A new Android AirTag finder app finally addresses some of those concerns.

How AirTags work

[…]

The Find My network employs the passive use of hundreds of millions of Apple devices to help expand your search. That way, you can locate your lost items even if they’re too far away for traditional wireless tracking. Your lost AirTag may be out of your own phone’s Bluetooth range, but it may not be far from another Apple device.

[…]

The Tracker Detect app comes out of a need for better security in the Find My network. Having such a wide network to track a tiny, easy-to-miss device could make it easy for a bad actor to use AirTags to follow someone without their knowledge.

People pointed out this vulnerability pretty soon after Apple announced the AirTags. With more than 113 million iPhones in the U.S., not to mention other Apple devices, the Find My network could be one of the widest tracking systems available. A device as small and easy-to-use as an AirTag on that network could make stalking easier than ever.

That said, Apple has a built-in feature designed to prevent tracking. If your iPhone senses that a strange AirTag, separated from its owner, is following you, it will send you an alert. If that AirTag is not found, it will start to make a sound anywhere from 8 to 24 hours after being separated from its owner.

However, Android users haven’t had these protections. That’s where Tracker Detect comes in; with this new Android AirTag app, you can scan the area to see if anyone may be tracking your location with an AirTag or other Find My-enabled accessory.

How to use Tracker Detect

If you’re concerned about people tracking you, download the Tracker Detect app from the Google Play Store. You don’t need an Apple account or any Apple devices to use it.

The app won’t scan automatically, so you’ll have to look for devices manually. To do that, open the app and tap Scan. Apple says it may take up to 15 minutes to find an AirTag that’s separated from its owner. You can tap Stop Scanning to end the search if you feel safe, and if the app detects something, it will mark it as Unknown AirTag.

Once the app has detected an AirTag, you can have it play a sound through the tag for up to ten minutes to help you find it. When you find the AirTag, you can scan it with an NFC reader to learn more about it.
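Under the hood, an app like Tracker Detect works by scanning for the Bluetooth Low Energy advertisements that separated AirTags broadcast. As a minimal sketch of the filtering step, the function below classifies an advertisement’s manufacturer data as a possible Find My “offline finding” beacon. The `0x12`/`0x19` byte values come from public reverse-engineering of the protocol (e.g. the OpenHaystack project), not from Apple documentation, so treat them as assumptions.

```python
APPLE_COMPANY_ID = 0x004C  # Bluetooth SIG company identifier assigned to Apple

def looks_like_find_my_beacon(company_id: int, payload: bytes) -> bool:
    """Heuristic check for a Find My "offline finding" advertisement.

    The 0x12 type byte and 0x19 length byte are taken from public
    reverse-engineering of the protocol; they are assumptions here,
    not an Apple-documented API.
    """
    return (
        company_id == APPLE_COMPANY_ID
        and len(payload) >= 2
        and payload[0] == 0x12   # advertisement type: offline finding
        and payload[1] == 0x19   # payload length used by Find My adverts
    )

# A scanner app would feed every advertisement it sees through a filter
# like this, then surface devices that keep reappearing near you.
print(looks_like_find_my_beacon(0x004C, bytes([0x12, 0x19]) + bytes(25)))  # True
print(looks_like_find_my_beacon(0x004C, bytes([0x10, 0x05, 0x01])))        # False
```

Repeated sightings over time, not a single match, are what distinguish “an AirTag happens to be nearby” from “an AirTag is following you,” which is why Apple’s own alerts only fire after a tag travels with you.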

[…]

Source: This App Will Tell Android Users If an AirTag Is Tracking Them