Google’s Privacy Settings Finally Won’t Break Its Apps Anymore, but require using My Ad Center

[…] It used to be that the only way to prevent Google from using your data for targeted ads was turning off personalized ads across your whole account, or disabling specific kinds of data using a couple of settings, including Web & App Activity and YouTube History. Those two settings control whether Google collects certain details about what you do on its platform (you can see some of that data here). Turning off the controls meant Google wouldn’t use the data for ads, but it disabled some of the most useful features on services such as Maps, Search, and Google Assistant.

Thanks to a new set of controls, that’s no longer true. You can now leave Web & App Activity and YouTube History on, but drill in to adjust more specific settings that tell Google you don’t want the related data used for targeted ads.

The detail is tucked into an announcement about the rollout of a new hub for Google’s advertising settings called My Ad Center. “You can decide what types of your Google activity are used to show you ads, without impacting your experience with the utility of the product,” Jerry Dischler, vice president of ads at Google, wrote in a blog post.

That’s a major step in the direction of what experts call “usable privacy,” or data protection that’s easy to manage without breaking other parts of the internet.

[…]

You’ll find the new controls in My Ad Center, which starts rolling out to users this week. It primarily serves as a hub for Google’s existing ad controls, but you’ll find some expanded options, new tools, and a number of other updates.

When you open My Ad Center, you’ll be able to fine-tune whether you see ads related to certain subjects or advertisers. […] You’ll also be able to view ads and advertisers that you’ve seen recently, and see all the ads that specific advertisers have run over the last thirty days.

Google also includes a way to toggle off ads on sensitive subjects such as alcohol, parenting, and weight loss. Unlike similar settings on Facebook and Instagram, though, you can’t tell Google you don’t want to see ads about politics.

Source: Google’s Privacy Settings Finally Won’t Break Its Apps Anymore

So you’ll probably need to spend quite some time configuring this – we will see. Most importantly, though, you are now directly telling Google what you do and don’t like (and what you don’t like tells them plenty about what you do like), without them having to feed your search behaviour through an algorithm and guess at how best to /– mind control –/ sell ads to you.

Texas sues Google for allegedly capturing biometric data of millions without consent

Texas has filed a lawsuit against Alphabet’s (GOOGL.O) Google for allegedly collecting biometric data of millions of Texans without obtaining proper consent, the attorney general’s office said in a statement on Thursday.

The complaint says that companies operating in Texas have been barred for more than a decade from collecting people’s faces, voices or other biometric data without advance, informed consent.

“In blatant defiance of that law, Google has, since at least 2015, collected biometric data from innumerable Texans and used their faces and their voices to serve Google’s commercial ends,” the complaint said. “Indeed, all across the state, everyday Texans have become unwitting cash cows being milked by Google for profits.”

The collection occurred through products like Google Photos, Google Assistant, and Nest Hub Max, the statement said.

[…]

Source: Texas sues Google for allegedly capturing biometric data of millions without consent | Reuters

Ring Cameras Are Being Used To Control and Surveil Overworked Delivery Workers

Networked doorbell surveillance cameras like Amazon’s Ring are everywhere, and have changed the nature of delivery work by letting customers take on the role of bosses to monitor, control, and discipline workers, according to a recent report (PDF) by the Data & Society tech research institute. “The growing popularity of Ring and other networked doorbell cameras has normalized home and neighborhood surveillance in the name of safety and security,” Data & Society’s Labor Futures program director Aiha Nguyen and research analyst Eve Zelickson write. “But for delivery drivers, this has meant their work is increasingly surveilled by the doorbell cameras and supervised by customers. The result is a collision between the American ideas of private property and the business imperatives of doing a job.”

Thanks to interviews with surveillance camera users and delivery drivers, the researchers are able to dive into a few major developments interacting here to bring this to a head. Obviously, the first one is the widespread adoption of doorbell surveillance cameras like Ring. Just as important as the adoption of these cameras, however, is the rise of delivery work and its transformation into gig labor. […] As the report lays out, Ring cameras allow customers to surveil delivery workers and discipline their labor by, for example, sharing shaming footage online. This dovetails with the “gigification” of Amazon’s delivery workers in two ways: labor dynamics and customer behavior.

“Gig workers, including Flex drivers, are sold on the promise of flexibility, independence and freedom. Amazon tells Flex drivers that they have complete control over their schedule, and can work on their terms and in their space,” Nguyen and Zelickson write. “Through interviews with Flex drivers, it became apparent that these marketed perks have hidden costs: drivers often have to compete for shifts, spend hours trying to get reimbursed for lost wages, pay for wear and tear on their vehicle, and have no control over where they work.” That competition between workers manifests in other ways too, namely acquiescing to and complying with customer demands when delivering purchases to their homes. Even without cameras, customers have made onerous demands of Flex drivers even as the drivers are pressed to meet unrealistic and dangerous routes alongside unsafe and demanding productivity quotas. The introduction of surveillance cameras at the delivery destination, however, adds another level of surveillance to the gigification. […] The report’s conclusion is clear: Amazon has deputized its customers and made them partners in a scheme that encourages antagonistic social relations, undermines labor rights, and provides cover for a march towards increasingly ambitious monopolistic exploits. As Nguyen and Zelickson point out, it is ingenious how Amazon has “managed to transform what was once a labor cost (i.e., supervising work and asset protection) into a revenue stream through the sale of doorbell cameras and subscription services to residents who then perform the labor of securing their own doorstep.”

Source: Ring Cameras Are Being Used To Control and Surveil Overworked Delivery Workers – Slashdot

TikTok joins Uber, Facebook in Monitoring The Physical Location Of Specific American Citizens

The team behind the monitoring project — ByteDance’s Internal Audit and Risk Control department — is led by Beijing-based executive Song Ye, who reports to ByteDance cofounder and CEO Rubo Liang.

The team primarily conducts investigations into potential misconduct by current and former ByteDance employees. But in at least two cases, the Internal Audit team also planned to collect TikTok data about the location of a U.S. citizen who had never had an employment relationship with the company, the materials show. It is unclear from the materials whether data about these Americans was actually collected; however, the plan was for a Beijing-based ByteDance team to obtain location data from U.S. users’ devices.

[…]

material reviewed by Forbes indicates that ByteDance’s Internal Audit team was planning to use this location information to surveil individual American citizens, not to target ads or any of these other purposes. Forbes is not disclosing the nature and purpose of the planned surveillance referenced in the materials in order to protect sources.

[…]

The Internal Audit and Risk Control team runs regular audits and investigations of TikTok and ByteDance employees, for infractions like conflicts of interest and misuse of company resources, and also for leaks of confidential information. Internal materials reviewed by Forbes show that senior executives, including TikTok CEO Shou Zi Chew, have ordered the team to investigate individual employees, and that it has investigated employees even after they left the company.

[…]

ByteDance is not the first tech giant to have considered using an app to monitor specific U.S. users. In 2017, the New York Times reported that Uber had identified various local politicians and regulators and served them a separate, misleading version of the Uber app to avoid regulatory penalties. At the time, Uber acknowledged that it had run the program, called “greyball,” but said it was used to deny ride requests to “opponents who collude with officials on secret ‘stings’ meant to entrap drivers,” among other groups.

[…]

Both Uber and Facebook also reportedly tracked the location of journalists reporting on their apps. A 2015 investigation by the Electronic Privacy Information Center found that Uber had monitored the location of journalists covering the company. Uber did not specifically respond to this claim. The 2021 book An Ugly Truth alleges that Facebook did the same thing, in an effort to identify the journalists’ sources. Facebook did not respond directly to the assertions in the book, but a spokesperson told the San Jose Mercury News in 2018 that, like other companies, Facebook “routinely use[s] business records in workplace investigations.”

[…]

https://www.forbes.com/sites/emilybaker-white/2022/10/20/tiktok-bytedance-surveillance-american-user-data/

So a bit of anti-China stirring, although it’s pretty sad that nowadays this kind of surveillance by tech companies has been normalised by the US government refusing to punish it.

iOS 16 VPN Tunnels Leak Data, Even When Lockdown Mode Is Enabled

AmiMoJo shares a report from MacRumors: iOS 16 continues to leak data outside an active VPN tunnel, even when Lockdown mode is enabled, security researchers have discovered. Speaking to MacRumors, security researchers Tommy Mysk and Talal Haj Bakry explained that iOS 16’s approach to VPN traffic is the same whether Lockdown mode is enabled or not. The news is significant since iOS has a persistent, unresolved issue with leaking data outside an active VPN tunnel.

According to a report from privacy company Proton, an iOS VPN bypass vulnerability had been identified in iOS 13.3.1, which persisted through three subsequent updates. Apple indicated it would add Kill Switch functionality in a future software update that would allow developers to block all existing connections if a VPN tunnel is lost, but this functionality does not appear to prevent data leaks as of iOS 15 and iOS 16. Mysk and Bakry have now discovered that iOS 16 communicates with select Apple services outside an active VPN tunnel and leaks DNS requests without the user’s knowledge.

Mysk and Bakry also investigated whether iOS 16’s Lockdown mode takes the necessary steps to fix this issue and funnel all traffic through a VPN when one is enabled, and it appears that the exact same issue persists whether Lockdown mode is enabled or not, particularly with push notifications. This means that the minority of users who are vulnerable to a cyberattack and need to enable Lockdown mode are equally at risk of data leaks outside their active VPN tunnel. […] Because iOS 16 leaks data outside the VPN tunnel even when Lockdown mode is enabled, internet service providers, governments, and other organizations may be able to identify users who generate a large amount of traffic, potentially highlighting influential individuals. It is possible that Apple does not want a potentially malicious VPN app to collect some kinds of traffic, but since ISPs and governments are then able to do exactly that, even when it is what the user is specifically trying to avoid, it seems likely that this is part of the same VPN problem that affects iOS 16 as a whole.
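
Leaks like these are typically confirmed by capturing packets on the physical interface and checking which flows bypass the tunnel. A minimal sketch of that classification step, with hypothetical interface addresses and flows:

```python
# Sketch: classify captured flows as tunneled or leaked, given the local
# addresses of the VPN tunnel and the physical (Wi-Fi) interface.
# All addresses and flow records below are hypothetical examples.

def classify_flows(flows, tunnel_ip, physical_ip):
    """Split (src_ip, dst_host) flow records by which interface they left on."""
    tunneled, leaked = [], []
    for src_ip, dst_host in flows:
        if src_ip == tunnel_ip:
            tunneled.append(dst_host)
        elif src_ip == physical_ip:
            leaked.append(dst_host)  # traffic bypassed the VPN tunnel
    return tunneled, leaked

if __name__ == "__main__":
    capture = [
        ("10.8.0.2", "example.com"),         # went through the tunnel
        ("192.168.1.23", "push.apple.com"),  # left via Wi-Fi: a leak
    ]
    tunneled, leaked = classify_flows(capture, "10.8.0.2", "192.168.1.23")
    print("leaked destinations:", leaked)
```

In practice the researchers do this with a packet capture tool on the Wi-Fi interface; the point is simply that any flow whose source address belongs to the physical interface rather than the tunnel never entered the VPN.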

https://m.slashdot.org/story/405931

Shein Owner Fined $1.9 Million For Failing To Notify 39 Million Users of Data Breach – Slashdot

Zoetop, the firm that owns Shein and its sister brand Romwe, has been fined (PDF) $1.9 million by New York for failing to properly disclose a data breach from 2018.

TechCrunch reports: A cybersecurity attack that originated in 2018 resulted in the theft of 39 million Shein account credentials, including those of more than 375,000 New York residents, according to the AG’s announcement. An investigation by the AG’s office found that Zoetop only contacted “a fraction” of the 39 million compromised accounts, and for the vast majority of the users impacted, the firm failed to even alert them that their login credentials had been stolen. The AG’s office also concluded that Zoetop’s public statements about the data breach were misleading. In one instance, the firm falsely stated that only 6.42 million consumers had been impacted and that it was in the process of informing all the impacted users.

https://m.slashdot.org/story/405939

Meta’s New $1499 Headset Will Track Your Eyes for Targeted Ads

Earlier this week, Meta revealed the Meta Quest Pro, the company’s most premium virtual reality headset to date with a new processor and screen, dramatically redesigned body and controllers, and inward-facing cameras for eye and face tracking. “To celebrate the $1,500 headset, Meta made some fun new additions to its privacy policy, including one titled ‘Eye Tracking Privacy Notice,'” reports Gizmodo. “The company says it will use eye-tracking data to ‘help Meta personalize your experiences and improve Meta Quest.’ The policy doesn’t literally say the company will use the data for marketing, but ‘personalizing your experience’ is typical privacy-policy speak for targeted ads.”

From the report: Eye tracking data could be used “in order to understand whether people engage with an advertisement or not,” said Meta’s head of global affairs Nick Clegg in an interview with the Financial Times. Whether you’re resigned to targeted ads or not, this technology takes data collection to a place we’ve never seen. The Quest Pro isn’t just going to inform Meta about what you say you’re interested in; tracking your eyes and face will give the company unprecedented insight about your emotions. “We know that this kind of information can be used to determine what people are feeling, especially emotions like happiness or anxiety,” said Ray Walsh, a digital privacy researcher at ProPrivacy. “When you can literally see a person look at an ad for a watch, glance for ten seconds, smile, and ponder whether they can afford it, that’s providing more information than ever before.”

[…]

https://m.slashdot.org/story/405885

Android Leaks Some Traffic Even When ‘Always-On VPN’ Is Enabled – Slashdot

Mullvad VPN has discovered that Android leaks traffic every time the device connects to a WiFi network, even if the “Block connections without VPN,” or “Always-on VPN,” feature is enabled. BleepingComputer reports: The data being leaked outside VPN tunnels includes source IP addresses, DNS lookups, HTTPS traffic, and likely also NTP traffic. This behavior is built into the Android operating system and is a design choice. However, Android users likely didn’t know this until now due to the inaccurate description of the “VPN Lockdown” features in Android’s documentation. Mullvad discovered the issue during a security audit that hasn’t been published yet, issuing a warning yesterday to raise awareness on the matter and apply additional pressure on Google.

Android offers a setting under “Network & Internet” to block network connections unless you’re using a VPN. This feature is designed to prevent accidental leaks of the user’s actual IP address if the VPN connection is interrupted or drops suddenly. Unfortunately, this feature is undercut by the need to accommodate special cases like identifying captive portals (like hotel WiFi) that must be checked before the user can log in or when using split-tunnel features. This is why Android is configured to leak some data upon connecting to a new WiFi network, regardless of whether you enabled the “Block connections without VPN” setting.
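
For context, the connectivity check that causes this leak is itself simple: the OS fetches a well-known probe URL and expects an empty HTTP 204 response; a login page or redirect instead means a captive portal is intercepting traffic. A minimal sketch of that logic (the probe URL is Google’s real Android endpoint; the classification is simplified):

```python
# Sketch of a captive-portal connectivity check like Android's:
# fetch a probe URL that normally returns an empty HTTP 204.
from urllib.request import urlopen

PROBE_URL = "http://connectivitycheck.gstatic.com/generate_204"

def interpret_probe(status, body_length):
    """Interpret a connectivity-probe response."""
    if status == 204 and body_length == 0:
        return "open"            # direct internet access
    if status in (200, 302):
        return "captive-portal"  # something intercepted the request
    return "offline"

if __name__ == "__main__":
    # On Android this request is sent OUTSIDE the VPN tunnel --
    # exactly the leak Mullvad is describing.
    try:
        with urlopen(PROBE_URL, timeout=5) as resp:
            print(interpret_probe(resp.status, len(resp.read())))
    except OSError:
        print("probe failed (no network)")
```

Because the probe has to reach the real network to tell a portal apart from the VPN, Android deliberately exempts it from the lockdown rule, which is the design choice Mullvad objects to.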

Mullvad reported the issue to Google, requesting the addition of an option to disable connectivity checks. “This is a feature request for adding the option to disable connectivity checks while “Block connections without VPN” (from now on lockdown) is enabled for a VPN app,” explains Mullvad in a feature request on Google’s Issue Tracker. “This option should be added as the current VPN lockdown behavior is to leaks connectivity check traffic (see this issue for incorrect documentation) which is not expected and might impact user privacy.” In response to Mullvad’s request, a Google engineer said this is the intended functionality and that it would not be fixed for the following reasons:

– Many VPNs actually rely on the results of these connectivity checks to function,
– The checks are neither the only nor the riskiest exemptions from VPN connections,
– The privacy impact is minimal, if not insignificant, because the leaked information is already available from the L2 connection.

Mullvad countered these points and the case remains open.

https://m.slashdot.org/story/405837

Judge’s Ruling That YouTube Ripping Tool May Violate Copyright Law goes nuts on argumentation

There are a number of different tools out there that let you download YouTube videos. These tools are incredibly useful for a number of reasons and should be seen as obviously legal in the same manner that home video recording devices were declared legal by the Supreme Court, because they have substantial non-infringing uses. But, of course, we’re in the digital age, and everything that should be obviously settled law is up for grabs again, because “Internet.”

In this case, a company named Yout offered a service for downloading YouTube video and audio, and the RIAA (because, they’re the RIAA) couldn’t allow that to happen. Home taping is killing music, y’know. Rather than going directly after Yout, the RIAA sent angry letters to lots of different companies that Yout relied on to exist. It got Yout’s website delisted from Google, had its payment processor cut the company off, etc. Yout was annoyed by this and filed a lawsuit against the RIAA.

The crux of the lawsuit is “Hey, we don’t infringe on anything,” asking for declaratory judgment. But it also seeks to go after the RIAA for DMCA 512(f) (false takedown notices) abuse and defamation (for the claims it made in the takedown notices it sent). All of these were going to be a longshot, and so it probably isn’t a huge surprise that the ruling was a complete loser for Yout (first posted to TorrentFreak).

But, in reading through the ruling there are things to be concerned about, beyond just the ridiculousness of saying that a digital VCR isn’t protected in the same way that a physical one absolutely is.

In arguing for declaratory judgment of non-infringement, Yout argues that it’s not violating DMCA 1201 (the problematic anti-circumvention provisions) because YouTube doesn’t really employ any technological protection measures that Yout has to circumvent. The judge disagrees, basically saying that even though it’s easy to download videos from YouTube, it still takes steps and is not just a feature that YouTube provides.

The steps outlined constitute an extraordinary use of the YouTube platform, which is self-evident from the fact that the steps access downloadable files through a side door, the Developer Tools menu, and that users must obtain instructions hosted on non-YouTube platforms to explain how to access the file storage location and their files. As explained in the previous section, the ordinary YouTube player page provides no download button and appears to direct users to stream content. I reasonably infer, then, that an ordinary user is not accessing downloadable files in the ordinary course.

That alone is basically an attack on the nature of the open internet. There are tons of features that websites don’t provide themselves, but which can easily be added to any website via add-ons, extensions, or just a bit of simple programming. But the judge here is basically saying that not providing a feature in the form of a button means there’s a technological protection measure, and bypassing it could be seen as infringing.

Yikes!

Of course, part of DMCA 1201 is not just having a technological protection measure in place, but an effective one. Here, there’s a strong argument that it isn’t effective at all, because basically the only protection measure is “not including a download button.” But the court sees it otherwise. Yout points out that YouTube makes basically no effort to block anyone from downloading videos, noting that it doesn’t even encrypt the files, and the court responds that it doesn’t need to encrypt the files, because other technological protections exist, like passwords and validation keys. But, uh, YouTube doesn’t use either of those either. So the whole thing is weird.

As I have already explained, the definition of “circumvent a technological measure” in the DMCA indicates that scrambling and encryption are prima facie examples of technological measures, but it does not follow that scrambling and encryption constitute an exhaustive list. Courts in the Second Circuit and beyond have held that a wide range of technological measures not expressly incorporated in statute are “effective,” including password protection and validation keys.

So again, the impression we’re left with is the idea that if a website doesn’t directly expose a feature, any third party service that provides that feature may be circumventing a TPM and violating DMCA 1201? That can’t be the way the law works.

Here, the court then says (and I only wish I were kidding) that modifying a URL is bypassing a TPM. Let me repeat that: modifying a URL can be infringing circumvention under 1201. That’s… ridiculous.

Moreover, Yout’s technology clearly “bypasses” YouTube’s technological measures because it affirmatively acts to “modify[]” the Request URL (a.k.a. signature value), causing an end user to access content that is otherwise unavailable. … As explained, without modifying the signature value, there is no access to the allegedly freely available downloadable files. Accordingly, I cannot agree with Yout that there is “nothing to circumvent.”
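
For perspective on what “modifying the Request URL” actually involves: editing a query parameter is a one-liner with any standard library. A generic sketch in Python (the URL and the sig parameter here are made up for illustration, not YouTube’s actual scheme):

```python
# The "circumvention" at issue amounts to editing a URL's query string.
# The URL and parameter names below are hypothetical examples.
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def modify_query_param(url, key, value):
    """Return the URL with one query parameter replaced."""
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query[key] = [value]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

original = "https://example.com/videoplayback?sig=abc123&itag=18"
modified = modify_query_param(original, "sig", "xyz789")
print(modified)  # https://example.com/videoplayback?sig=xyz789&itag=18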


Then, as Professor Eric Goldman notes, the judge dismisses the 512(f) claims by saying that 512(f) doesn’t apply to DMCA 1201 claims. As you hopefully remember, 512(f) is the part of the DMCA that is supposed to punish copyright holders for sending false notices. In theory. In practice, courts have basically said that as long as the sender believes the notice is legit, it’s legit, and therefore there is basically never any punishment for sending false notices.

Saying that 512(f) only applies to 512 takedown notices, and not 1201 takedown notices is just yet another example of the inherent one-sidedness of the DMCA. For years, we’ve pointed out how ridiculous 1201 is, in which merely advertising tools that could be used to circumvent a technical protection measure is considered copyright infringement in and of itself — even if there’s no actual underlying infringement. Given how expansive 1201 is in favor of copyright holders, you’d think it only makes sense to say that bogus notices should face whatever tiny penalty might be available under 512(f), but the judge here says “nope.” As Goldman highlights, this will just encourage people to send takedowns where they don’t directly cite 512, knowing that it will protect them from 512(f) responses.

One other oddity that Goldman also highlights: most of the time if we’re thinking about 1201 circumvention, we’re talking about the copyright holder themselves getting upset that someone is routing around the technical barriers that they put up. But this case is different. YouTube created the technical barriers (I mean, it didn’t actually, but that’s what the court is saying it did), but YouTube is not a party to the lawsuit.

So… that raises a fairly disturbing question. Could the RIAA (or any copyright holder) sue someone for a 1201 violation for getting around someone else’s technical protection measures? Because… that would be weird. But parts of this decision suggest that it’s exactly what the judge envisions.

Yes, some may argue that this tool is somehow “bad” and shouldn’t be allowed. I disagree, but I understand where the argument comes from. But even if you believe that, it seems like a ruling like this could still lead to all sorts of damage for various third-party tools and services. The internet and the World Wide Web were built to be modular. It’s quite common for third-party services to build tools and overlays and extensions and whatnot to add features to certain websites.

It seems crazy that this ruling seems to suggest that might violate copyright law.

Source: There Are All Sorts Of Problems With Ruling That YouTube Ripping Tool May Violate Copyright Law | Techdirt

The biggest problem is that if you don’t download the video to your device, you can’t actually watch it, so YouTube is designed to allow you to download the video.

Book Publishing Giant Wiley Pulls Nearly 1400 Ebook Titles From GW Library Forcing Students To Buy Them Instead

[…]

George Washington University libraries have put out an alert to students and faculty that Wiley, one of the largest textbook publishers, has removed 1,379 textbook titles from the collection the library can lend out. Wiley won’t even let the library purchase a license to lend out the ebooks. It will only let students buy the books.

Wiley will no longer offer electronic versions of these titles in the academic library market for license or purchase. To gain access to these titles, students will have to purchase access from vendors that license electronic textbooks directly to students, such as VitalSource, or purchase print copies. At most, GW Libraries can acquire physical copies for course reserve, which severely reduces the previous level of access for all students in a course.

This situation highlights how the behavior of large commercial publishers poses a serious obstacle to textbook affordability. In this case, Wiley seems to have targeted for removal those titles in a shared subscription package that received high usage. By withdrawing those electronic editions from the academic library market altogether, Wiley has effectively ensured that, when those titles are selected as course textbooks, students will bear the financial burden, and that libraries cannot adequately provide for the needs of students and faculty by providing shared electronic access. 

For years now, we’ve noted that if libraries didn’t already exist, you know that the publishers would scream loudly that they amounted to piracy, and would almost certainly block libraries from coming into existence. Of course, since we first noted that, the publishers seem to think they can and should just kill off libraries. They’ve repeatedly jacked up the prices on ebooks for libraries, making them significantly more expensive to libraries than print books, and putting ridiculous limitations on them. That is, when they even allow them to be lent out at all.

They’ve also sued the Internet Archive for daring to lend out ebooks of books that the Archive had in its possession.

And now they’re pulling stunts like this with academic libraries?

And, really, this is yet another weaponization of copyright. If it wasn’t an ebook, the libraries could just purchase copies of the physical book on the open market, and then lend it out. That’s what the first sale right enables. But the legacy copyright players made sure that the first sale right did not exist in the digital space, and now we get situations like this, where they get to dictate the terms over whether or not a library (an academic one at that) can even lend out a book.

This is disgusting behavior and people should call out Wiley for its decision here.

Source: Book Publishing Giant Pulls Nearly 1400 Ebook Titles From GW Library; Forcing Students To Buy Them Instead | Techdirt

Why Reddit Is Losing It Over Samsung’s New Privacy Policy – it’s an incredible data grab

Samsung recently updated its privacy policy for all users with a Samsung account, effective Oct. 1. One Redditor read the policy, did not like what they saw, and shared it to r/android, highlighting what they consider to be the doc’s worst policy points. The thread blew up, with Android users aplenty decrying Samsung’s new policy. But why is everyone so pissed off, and is any of it worth worrying about? Let’s explore.

Samsung’s privacy policy is a bit creepy

From the jump, the new policy doesn’t look good. In fact, it appears downright invasive. There are the standard data giveaways we’ve come to expect: When you create a Samsung account, you must hand over personal information like your name, age, address, email address, gender, etc. Par for the course.

However, Samsung also notes it will collect data such as credit card information, usernames and passwords for third-party services, photos, contacts, text logs, recordings of your voice generated during voice commands, and location data, including precise location data as well as nearby wifi access points and cell towers. It might come as a surprise to know a company like Samsung can keep your chat transcripts, contacts, and voice recordings, but there’s precedent: Apple found itself in hot water when third-party contractors revealed they were able to listen in on audio recordings from Siri requests, which included all kinds of personal conversations and activities.

Samsung also tracks your general activity via cookies, pixels, web beacons, and other means. The company claims this tracking is done for a variety of reasons, including remembering your information to avoid you having to retype it in the future, and to better learn how you use their services. To achieve these goals, it collects just about everything there is to know about your device, including your IP address, device model, device settings, websites you visit, and apps you download, among many others. The policy does remind you to adjust your privacy settings if you’re uncomfortable with this default tracking (as if anyone wouldn’t be).

The company says it has a lot of uses for this information, including ad delivery, communication with customers, enhancing their services, improving their business, identifying and preventing fraud and criminal activity, and to comply with “applicable legal requirements.” Further, they reserve the right to share your information with “subsidiaries and affiliates,” “business partners and third-parties,” as well as law enforcement and other authorities. In short, depending on the circumstances, your Samsung data could end up in the hands of a lot of third parties.

But that’s not everything. The “Notice to California Residents” section is where the juiciest policies emerge. While most of the info is the same, just broken down in a different way, there is one additional note about data Samsung collects: biometric information. The company doesn’t elaborate, but this entry implies Samsung obtains data from face and fingerprint scans, when traditionally, this information is stored on-device. Apple, for example, doesn’t have access to your face scans on your iPhone. Obviously, this is potentially concerning.

In addition, the California Residents section also discusses what data Samsung sells to third parties. Samsung says in the 12 months before this new policy went into effect, it may have sold data of yours, including device identifiers (cookies, pixel tags, etc.), purchase histories or tendencies, and network activity, including how you interact with websites.

[…]

If you’re eyeing your Galaxy Z Flip with newfound skepticism, I don’t blame you. Unfortunately, if you dive into the privacy policies for most of your other tech, you’ll be similarly disturbed. Samsung is hardly the only company collecting, sharing, and selling your data.

One Redditor does make a great point about the redundancy of the privacy violations here. Sure, Google might have similar policies in place, but since Samsung’s phones run Android, you’re really dealing with two meddling companies, not one:

Considering the prices for their hardware, the un-removable bloatware that is generally inferior to the Google software, and anti-Right-to-Repair campaigns (and reflections in their hardware), I see no reason to buy their phones over Google’s. I’ll have just one company with intrusive insight into my personal device at a time, thank you.

[…]

Source: Why Reddit Is Losing It Over Samsung’s New Privacy Policy

The Onion defends right to parody in very real supreme court brief supporting local satirist vs Police who were made fun of

The Onion, the long-running satirical publication, has filed a very real legal document with the US supreme court, urging it to take on a case centered on the right to parody. And in order to make a serious legal point, the filing does what the Onion does best, offering a big helping of total nonsense.

Claiming global Onion readership of 4.3 trillion, the filing describes the publication as “the single most powerful and influential organization in human history”. It’s the source of 350,000 jobs at its offices and “manual labor camps”, and it “owns and operates the majority of the world’s transoceanic shipping lanes, stands on the nation’s leading edge on matters of deforestation and strip mining, and proudly conducts tests on millions of animals daily”.

With such power, why does the Onion feel the need to weigh in on a mundane court case? “To protect its continued ability to create fiction that may ultimately merge into reality,” the filing asserts. “The Onion’s writers also have a self-serving interest in preventing political authorities from imprisoning humorists. This brief is submitted in the interest of at least mitigating their future punishment.”

The outlet is concerned about the outcome of a case it describes in a headline: “Ohio Police Officers Arrest, Prosecute Man Who Made Fun of Them on Facebook”. It sounds like an Onion headline, the filing points out, but it’s not.

In 2016, Anthony Novak was arrested for making a Facebook page that parodied the local police page. He was charged with disrupting a public service but was acquitted. The next year, he sued the department, arguing it was retaliating against him for using his right to free speech, as Cleveland.com reported.

In May, a US appeals court backed the police in the case, a finding Novak’s lawyer said “sets dangerous precedent undermining free speech”. Last week, Novak appealed the ruling to the supreme court, leading to the Onion’s filing – what’s known as an amicus brief, a filing by an outside party seeking to influence the court.

In one of its less amusing sections, the brief argues that the appeals court ruling “imperils an ancient form of discourse. The court’s decision suggests that parodists are in the clear only if they pop the balloon in advance by warning their audience that their parody is not true. But some forms of comedy don’t work unless the comedian is able to tell the joke with a straight face.”

The filing highlights the history of parody and its social function: “It adopts a particular form in order to critique it from within”. To demonstrate, the Onion cites one of its own greatest headlines: “Supreme court rules supreme court rules”.

The document serves as a rare glimpse behind the comedy curtain – an explanation of how jokes work – even as it serves as a more traditional legal document, pointing to relevant court cases and using words like “dispositive”.

The city of Parma has until 28 October to provide a response in a case that would be heard next year if the high court opts to consider it.

In the meantime, “the Onion cannot stand idly by in the face of a ruling that threatens to disembowel a form of rhetoric that has existed for millennia, that is particularly potent in the realm of political debate, and that, purely incidentally, forms the basis of The Onion’s writers’ paychecks”.

Source: The Onion defends right to parody in very real supreme court brief supporting local satirist | US supreme court | The Guardian

Publishers Lose Their Shit After Authors Push Back On Their Attack On Libraries, start fake newsing

On Friday, we wrote about hundreds of authors signing a letter calling out the big publishers’ attacks on libraries (in many, many different ways). The publishers pretend to represent the best interests of the authors, but history has shown over and over again that they do not. They represent themselves, and use the names of authors they exploit to claim the moral high ground they do not hold.

It’s no surprise, then, that the publishers absolutely fucking lost their shit after the letter came out. The Association of American Publishers put out a statement falsely claiming that the letter, put out by Fight for the Future (FftF), and signed by tons of authors from the super famous to the less well known, was actually “disinformation in the Internet Archive case.” And, look, if you’re at the point you’re blaming the Internet Archive for something another group actually did, you know you’ve lost, and you’re just lashing out.

Perhaps much more telling is that the Authors Guild actually put out an even more aggressive statement against Fight for the Future. Now, as bestselling author Barry Eisler (who signed onto Fight for the Future’s letter) wrote right here on Techdirt years ago, it’s been clear for a while that the Authors Guild is not actually representing the best interests of authors. It has long been a front group for the publishers themselves.

The Authors Guild’s response to the FftF letter simply confirms this.

First, it claims that authors were misled into signing the letter by an earlier, different draft of the letter. This is simply false. The Authors Guild is making shit up because they just can’t believe that maybe authors actually support this.

They do name one author, Daniel Handler (aka Lemony Snicket), who had signed on, but removed his name before the letter was even published. But… I’m guessing the real reason that probably happened was that the publishers (who learned about the letter before it was published as proved by this email that was sent around prior to the release) FLIPPED OUT when they saw Handler’s name was on the letter. That’s because in their lawsuit against the Internet Archive’s open library project, they rely heavily on the claim that Lemony Snicket’s books are available there.

It seems reasonable to speculate that the publishers saw his name was on the letter, realized it undermined basically the crux of their case, and came down like a ton of bricks on him to pressure him into un-signing the letter. That story, at the very least, makes more sense than someone like Handler somehow being “tricked” into signing a letter that very clearly says what it says.

The Authors Guild’s other claims are equally sketchy.

The lawsuit against Open Library is completely unrelated to the traditional rights of libraries to own and preserve books. It is about Open Library’s attempt to stretch fair use to the breaking point – where any website that calls itself a library could scan books and make them publicly available – a practice engaged in by ebook pirates, not libraries.

This completely misrepresents what the Open Library does, and its direct parallel to any physical library: it buys a copy of a book and can then lend out that copy. The courts have already established that scanning books is legal fair use (thanks to a series of cases the Authors Guild brought and, embarrassingly, lost), and the Open Library only allows one-to-one lending: one ebook loan at a time for each physical book it owns. It is functionally equivalent to any other library in every way.
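The one-to-one constraint at the heart of controlled digital lending is simple enough to express in a few lines. This is a toy sketch of the concept only, not the Internet Archive’s actual system; the class and title used are made up for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ControlledLendingLibrary:
    """Toy model of controlled digital lending: ebook checkouts can
    never exceed the number of physical copies the library owns."""
    owned_copies: dict = field(default_factory=dict)   # title -> copies owned
    checked_out: dict = field(default_factory=dict)    # title -> copies on loan

    def lend(self, title: str) -> bool:
        out = self.checked_out.get(title, 0)
        if out >= self.owned_copies.get(title, 0):
            return False  # every owned copy is already on loan
        self.checked_out[title] = out + 1
        return True

    def return_copy(self, title: str) -> None:
        if self.checked_out.get(title, 0) > 0:
            self.checked_out[title] -= 1

lib = ControlledLendingLibrary(owned_copies={"Some Novel": 2})
assert lib.lend("Some Novel")       # copy 1 goes out
assert lib.lend("Some Novel")       # copy 2 goes out
assert not lib.lend("Some Novel")   # no third copy owned, loan refused
lib.return_copy("Some Novel")
assert lib.lend("Some Novel")       # a returned copy can circulate again
```

The point of the sketch: nothing is “made publicly available” in the piracy sense, because a loan is refused the moment every owned copy is out, exactly as at a physical branch.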

And this actually matters at a time when these very same publishers are trying to use twisted interpretations of copyright law to insist that they can limit how libraries buy and lend ebooks in ways that simply are not possible under the law with physical books.

Also, there’s this bit of nonsense:

The lawsuit is being brought only against IA’s Open Library; it will not impact in any way the Wayback Machine or any other services IA offers.

This is laughable. The lawsuit is asking for millions and millions of dollars from the Internet Archive. If it loses the case, there’s a very strong likelihood that the entire Internet Archive will need to shut down, because it will be unable to pay. Even if the Internet Archive could survive, the idea that forcing this non-profit to fork over tens of millions of dollars wouldn’t have any impact on other parts of its offerings is absurd.

Fight for the Future has hit back at these accusations:

As expected, corporate publishing industry lobbyists have responded by attempting to undermine the demands of these authors by circulating false and condescending talking points, a frequent tactic lobbyists use to divert attention from the principled actions of activists.

The statement from the Authors Guild specifically asserts, without evidence, that “multiple authors” who signed this letter feel they were “misled”. This assertion is false and we challenge these lobbyists to either provide evidence for their claim or retract it. 

It’s repugnant for industry lobbying associations who claim to represent authors to dismiss the activism of author-signatories like Neil Gaiman, Chuck Wendig, Naomi Klein, Roger McNamee, Baratunde Thurston, Lawrence Lessig, Cory Doctorow, Annalee Newitz, and Douglas Rushkoff, or claim that these authors were somehow misled into signing a brief and clear letter issuing specific demands for the good of all libraries. Corporate publishing lobbyists are free to disagree with the views stated in our letter, but it’s unacceptable for them to make false claims about our organization or the authors who signed.

They also highlight how many authors who signed onto the letter talked about how proud they are that their books are available at the Internet Archive, which is not at all what you would expect if the Open Library was actually about “piracy.”

Author Elizabeth Kate Switaj said when signing: “My most recently published book is on the Internet Archive—and that delights me.” Dan Gillmor said: “Big Publishing would outlaw public libraries if it could—or at least make it impossible for libraries to buy and lend books as they have traditionally done, to enormous public benefit—and its campaign against the Internet Archive is a step toward that goal.” Sasha Costanza-Chock called publishers’ actions against the Internet Archive “absolutely shameful” and Laura Gibbs said “it’s the library I use most, and I am proud to see my books there.”

They also rightly push back on the total nonsense claims that FftF is “not independent” and is somehow a front for the Internet Archive. I know people at both organizations, and this assertion is laughable. The two organizations agree on many things, but are absolutely and totally independent. This is nothing but a smear from an Authors Guild that can’t even fathom that most authors dislike the publishers, or the way the Guild has become an organization that looks out not for the best interests of all authors, but only for those of a few of the biggest names.

Source: Publishers Lose Their Shit After Authors Push Back On Their Attack On Libraries | Techdirt

EA Announces New Anti-Cheat Tech That Operates At The Kernel Level ie takes over your PC, can read and write everything on it

It seems anti-cheat technology is the new DRM. By that I mean that, with the gaming industry diving headfirst into the competitive online gaming scene, the concern over piracy has shifted into a concern over cheating making those online games less attractive to gamers. And the anti-cheat tech that companies are using is starting to make the gaming public every bit as itchy as it was over DRM.

Consider that Denuvo’s own anti-cheat tech has already started following its DRM path in getting ripped out of games shortly after release after one game got review-bombed over just how intrusive it was. And then consider that Valve had to reassure gamers that its own anti-cheat technology wasn’t watching users’ browsing habits, given that the VAC platform was designed to sniff out kernel-level cheats. One notable Reddit thread had gamers comparing Valve to Electronic Arts as a result.

Which makes it perhaps more interesting that EA recently announced new anti-cheat technology that, yup, operates at the kernel level.

The new kernel-level EA Anti-Cheat (EAAC) tools will roll out with the PC version of FIFA 23 this month, EA announced, and will eventually be added to all of its multiplayer games (including those with ranked online leaderboards). But strictly single-player titles “may implement other anti-cheat technology, such as user-mode protections, or even forgo leveraging anti-cheat technology altogether,” EA Senior Director of Game Security & Anti-Cheat Elise Murphy wrote in a Tuesday blog post.

Unlike anti-cheat methods operating in an OS’s normal “user mode,” kernel-level anti-cheat tools provide a low-level, system-wide view of how cheat tools might mess with a game’s memory or code from the outside. That allows anti-cheat developers to detect a wider variety of cheating threats, as Murphy explained in an extensive FAQ.

The concern from gamers came quickly. You have to keep in mind that none of this occurs without the context of history. There’s a reason why, even today, a good chunk of the gaming public knows all about the Sony rootkit fiasco. They’re aware of the claims that DRM like Denuvo’s affects PC performance. They’ve heard plenty of horror stories about gaming companies, or other software companies, co-opting security tools like this in order to slurp up all kinds of PII or user activity for non-gaming purposes. Hell, one of the more prolific antivirus companies recently announced a plan to also use customer machines for crypto-mining.

So it’s in that context that gamers are hearing that EA would please like to access the most base-level and sensitive parts of a customer’s PC, just to make sure that fewer people can cheat online in FIFA.

Privacy aside, some users might also worry that a new kernel-level driver could destabilize or hamper their system (à la Sony’s infamous music DRM rootkits). But Murphy promised that EAAC is designed to be “as performant and lightweight as possible. EAAC will have negligible impact on your gameplay.”

Kernel-level tools can also provide an appealing new attack surface for low-level security exploits on a user’s system. To account for that, Murphy said her team has “worked with independent, 3rd-party security and privacy assessors to validate EAAC does not degrade the security posture of your PC and to ensure strict data privacy boundaries.” She also promised daily testing and constant report monitoring to address any potential issues that pop up.

Gamers have heard these promises before. Those promises have been broken before. Chiding the public for being concerned at granting kernel-level access to their machines just to keep online gaming less ridden with cheaters is a tough sell.

Source: EA Announces New Anti-Cheat Tech That Operates At The Kernel Level | Techdirt

Blizzard really really wants your phone number to play its games – personal data grab and security risk

When Overwatch 2 replaces the original Overwatch on Oct. 4, players will be required to link a phone number to their Battle.net accounts. If you don’t, you won’t be able to play Overwatch 2 — even if you’ve already purchased Overwatch. The same two-factor step, called SMS Protect, will also be used on all Call of Duty: Modern Warfare 2 accounts when that game launches, and new Call of Duty: Modern Warfare accounts.

Blizzard Entertainment announced SMS Protect and other safety measures ahead of Overwatch 2’s release. Blizzard said it implemented these controls because it wanted to “protect the integrity of gameplay and promote positive behavior in Overwatch 2.”

[…]

SMS Protect is a security feature that has two purposes: to keep players accountable for what Blizzard calls “disruptive behavior,” and to protect accounts if they’re hacked. It requires all Overwatch 2 players to attach a unique phone number to their account. Blizzard said SMS Protect will target cheaters and harassers; if an account is banned, it’ll be harder for its owner to return to Overwatch 2. You can’t just enter any old phone number — you actually have to have access to a phone receiving texts to that number to get into your account.

[…]

Blizzard said these phone notifications will be used to approve password resets — meaning someone else won’t be able to change your password without the notification code it’ll send to your mobile phone. Blizzard said it will also send you a text message if your account is locked out after “a suspicious login attempt,” or if your password or security features are changed.
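The reset-approval flow described here is standard SMS one-time-code verification. Below is a minimal sketch of how such a scheme typically works server-side; it is illustrative only (Blizzard hasn’t published its implementation), and the account name and function names are invented:

```python
import hashlib, hmac, secrets, time

def issue_code(store: dict, account: str, ttl: int = 300) -> str:
    """Generate a 6-digit one-time code; keep only a hash server-side."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    store[account] = (hashlib.sha256(code.encode()).hexdigest(), time.time() + ttl)
    return code  # this is what would be texted to the user's phone

def verify_code(store: dict, account: str, attempt: str) -> bool:
    """Approve the reset only if the code matches and hasn't expired."""
    record = store.pop(account, None)  # single use: consumed on first attempt
    if record is None:
        return False
    digest, expires = record
    if time.time() > expires:
        return False
    return hmac.compare_digest(digest, hashlib.sha256(attempt.encode()).hexdigest())

store = {}
code = issue_code(store, "player#1234")
assert verify_code(store, "player#1234", code)      # correct code approves reset
assert not verify_code(store, "player#1234", code)  # and it only works once
```

Note the trade-off the article is circling: the scheme is only as secure as the phone number itself, which is exactly why losing or changing that number becomes a problem.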

Source: Overwatch 2 SMS Protect: What is it? Why does Blizzard require my phone number? – Polygon

So this is a piece of ‘real’ information you have to give them – but what if you move country and change mobile numbers? What if you lose your mobile? What if Blizzard gets hacked (again) and someone takes your number? A phone number is either something that does change on you or something that is very hard to change. It shows that Blizzard basically sees your data as something it can grab for free – you are their product. Even though the games are technically free to play, in practice Blizzard makes a killing off the items you buy in-game in order to be cool.

They will probably get away with it though, just as they got away with installing spyware on your PC or when you attend their events under pretty flimsy pretenses.

Neil Gaiman, Cory Doctorow And Other Authors Publish Letter Protesting Lawsuit Against Internet Library

A group of authors and other creative professionals are lending their names to an open letter protesting publishers’ lawsuit against the Internet Archive Library, characterizing it as one of a number of efforts to curb libraries’ lending of ebooks.

Authors including Neil Gaiman, Naomi Klein, and Cory Doctorow lent their names to the letter, which was organized by the public interest group Fight for the Future.

“Libraries are a fundamental collective good. We, the undersigned authors, are disheartened by the recent attacks against libraries being made in our name by trade associations such as the American Association of Publishers and the Publishers Association: undermining the traditional rights of libraries to own and preserve books, intimidating libraries with lawsuits, and smearing librarians,” the letter states.

A group of publishers sued the Internet Archive in 2020, claiming that its open library violates copyright by producing “mirror image copies of millions of unaltered in-copyright works for which it has no rights” and then distributes them “in their entirety for reading purposes to the public for free, including voluminous numbers of books that are commercially available.” They also contend that the archive’s scanning undercuts the market for e-books.

The Internet Archive says that its lending of the scanned books is akin to a traditional library. In its response to the publishers’ lawsuit, it warns of the ramifications of the litigation and claims that publishers “would like to force libraries and their patrons into a world in which books can only be accessed, never owned, and in which availability is subject to the rightsholders’ whim.”

The letter also calls for enshrining “the right of libraries to permanently own and preserve books, and to purchase these permanent copies on reasonable terms, regardless of format,” and condemns the characterization of library advocates as “mouthpieces” for big tech.

“We fear a future where libraries are reduced to a sort of Netflix or Spotify for books, from which publishers demand exorbitant licensing fees in perpetuity while unaccountable vendors force the spread of disinformation and hate for profit,” the letter states.

The litigation is in the summary judgment stage in U.S. District Court in New York.

Hachette Book Group, HarperCollins Publishers, John Wiley & Sons Inc and Penguin Random House are plaintiffs in the lawsuit.

[…]

Source: Authors Publish Letter Protesting Lawsuit Against Internet Library – Deadline

Open internet at stake in UN ITU secretary-general election

[…]  this year’s event has become a geopolitical football – and possibly a turning point for internet governance – thanks to the two candidates running in an election for the position of ITU secretary-general.

[…]

The USA has put forward Doreen Bogdan-Martin for the gig.

[…]

Russia has nominated Rashid Ismailov for the job. A former deputy minister at Russia’s Ministry of Telecom and Mass Communication, Ismailov has also worked for Huawei.

Speaking of Huawei, in 2019 it and China Mobile, China Unicom, and China’s Ministry of Industry and Information Technology (MIIT), did something unexpected: submit a proposal to the ITU for a standard called New IP to supersede Internet Protocol. The entities behind New IP claimed it is needed because existing protocols don’t include sufficient quality-of-service guarantees, so netizens will struggle to handle latency-sensitive future applications, and also because current standards lack intrinsic security.

New IP is controversial for two reasons.

One is that the ITU does not oversee IP (as in, Internet Protocol, the standard that helps glue our modern communications together). That’s the IETF’s job. The IETF is a multi-stakeholder organization that accepts ideas from anywhere – the QUIC protocol that’s potentially on the way to replacing TCP originated at Google but was developed into a standard by the IETF. The ITU is a United Nations body so represents nation-states.

The other is that New IP proposes a Many Networks – or ManyNets – approach to global internetworking, with distinct, individual networks allowed to set their own rules on access to systems and content. Some of the rules envisioned under New IP could require individuals to register for network access, and allow central control – even shutdowns – of traffic on a national network.

New IP is of interest to those who like the idea of a “sovereign internet” such as China’s, on which the government conducts pervasive surveillance and extensive censorship.

China argues it can do as it pleases within its borders. But New IP has the potential to make some of the controls China uses on its local internet part of global protocols.

Another nation increasingly interested in a sovereign internet is Russia, which was not particularly tolerant of free speech before its illegal invasion of Ukraine and has since implemented sweeping censorship across its patch of the internet.

The possibility of Rashid Ismailov being elected ITU boss, and potentially driving adoption of censorship-enabling New IP around the world, therefore has plenty of people worried – not least because in 2021 Russia and China issued a joint statement that called for “all States [to] have equal rights to participate in global-network governance, increasing their role in this process and preserving the sovereign right of States to regulate the national segment of the Internet.”

[…]

In an email to The Register sent in a personal capacity, Lars Eggert, chair of the IETF, stated: “I personally would wish for the ITU to reaffirm its commitment to the consensus-based multi-stakeholder model that has been the foundation for the success of the Internet, and is at the heart of the open standards development model the IETF and other standards developing organizations follow when improving the overall Internet architecture and its protocol components.”

He added, “I personally would like to see an ITU leadership emerge that strengthens the ITU’s commitment to the above-mentioned approach to Internet evolution.”

Eggert pointed to an official IETF response to New IP that criticizes its potential for central control and argues that existing IETF processes and projects already address the issues the China-derived proposal seeks to tackle.

The Internet Society, the non-profit that promotes open internet development, is also concerned about the proceedings at the ITU event.

“Plenipotentiary-22 could be a turning point for the Internet,” the organization stated in a mail to The Register. “The multi-stakeholder Internet governance model and principles are being called into question by some ITU Member States and there are multilateral processes aiming to position governments as the main decision-makers regarding Internet governance.”

The society told The Register: “Internet technical standards must remain within the domain of the appropriate standards bodies, such as the IETF, where work that intends to update, amend, or develop Internet technical standards must be presented.”

[…]

Source: Open internet at stake in UN ITU secretary-general election

Subreddit Discriminates Against Anyone Who Doesn’t Call Texas Governor Greg Abbott ‘A Little Piss Baby’ To Highlight Absurdity Of Content Moderation Law Designed for White Supremacists

Last year, I tried to create a “test suite” of websites that any new internet regulation ought to be “tested” against. The idea was that regulators were so obsessively focused on the biggest of the big guys (i.e., Google, Meta) that they never bothered to realize how it might impact other decently large websites that involved totally different setups and processes. For example, it’s often quite impossible to figure out how a regulation about Google and Facebook content moderation would work on sites like Wikipedia, Github, Discord, or Reddit.

Last week, we called out that Texas’s HB 20 social media content moderation law almost certainly applies to sites like Wikipedia and Reddit, yet I couldn’t see any fathomable way in which those sites could comply, given that so much of the moderation on each is driven by users rather than the company. It’s been funny watching supporters of the law try to insist that this is somehow easy for Wikipedia (probably the most transparent larger site on the internet) to comply with by being “more transparent and open access.”

If you somehow can’t see that tweet or screenshot, it’s a Trumpist defender of the law responding to someone asking how Wikipedia can comply with the law, saying:

Wikipedia would have to offer more transparent and open access to their platform, which would allow truth to flourish over propaganda there? Is that what you’re worried about, or what is it?

To which a reasonably perplexed Wikipedia founder Jimmy Wales rightly responds:

What on earth are you talking about? It’s like you are writing from a different dimension.

Anyway… it seems some folks on Reddit are realizing the absurdity of the law and trying to demonstrate it in the most internety way possible. Michael Vario alerts us that the r/PoliticalHumor subreddit is “messing with Texas” by requiring every comment to include the phrase “Greg Abbott is a little piss baby” or be deleted in a fit of content moderation discrimination in violation of the HB20 law against social media “censorship.”

Until further notice, all comments posted to this subreddit must contain the phrase “Greg Abbott is a little piss baby”

There is a reason we’re doing this, the state of Texas has passed H.B. 20 (full text here), which is a ridiculous attempt to control social media. Just this week, an appeals court reinstated the law after a different court had declared it unconstitutional. Vox has a pretty easy to understand writeup, but the crux of the matter is, the law attempts to force social media companies to host content they do not want to host. The law also requires moderators to not censor any specific point of view, and the language is so vague that you must allow discussion about human cannibalization if you have users saying cannibalization is wrong. Obviously, there are all sorts of real world problems with it, the obvious ones being forced to host white nationalist ideology or insurrectionist ideation. At the risk of editorializing, that might be a feature, not a bug for them.

Anyway, Reddit falls into a weird category with this law. The actual employees of the company Reddit do, maybe, one percent of the moderation on the site. The rest is handled by disgusting jannies volunteer moderators, who Reddit has made quite clear over the years, aren’t agents of Reddit (mainly so they don’t lose millions of dollars every time a mod approves something vaguely related to Disney and violates their copyright). It’s unclear whether we count as users or moderators in relation to this law, and none of us live in Texas anyway. They can come after all 43 dollars in my bank account if they really want to, but Virginia has no obligation to extradite or anything.

We realized what a ripe situation this is, so we’re going to flagrantly break this law. Partially to raise awareness of the bullshit of it all, but mainly because we find it funny. Also, we like this Constitution thing. Seems like it has some good ideas.

They also include a link to the page where people can file a complaint with the Texas Attorney General, Ken Paxton, asking him to investigate whether the deletion of any comments that don’t claim that his boss, Governor Greg Abbott, is “a little piss baby” is viewpoint discrimination in violation of the law.

Source: Subreddit Discriminates Against Anyone Who Doesn’t Call Texas Governor Greg Abbott ‘A Little Piss Baby’ To Highlight Absurdity Of Content Moderation Law | Techdirt

This Controversial Artist Matches Influencer Photoshoots With Surveillance Footage

It’s an increasingly common sight on vacation, particularly in tourist destinations: An influencer sets up in front of a popular local landmark, sometimes even using props (coffee, beer, pets) or changing outfits, as a photographer or self-timed camera snaps away. Others are milling around, sometimes watching. But often, unbeknownst to everyone involved, another device is also recording the scene: a surveillance camera.

Belgian artist Dries Depoorter is exploring this dynamic in his controversial new online exhibit, The Followers, which he unveiled last week. The art project places static Instagram images side-by-side with video from surveillance cameras, which recorded footage of the photoshoot in question.

On its face, The Followers is an attempt, like many other studies, art projects and documentaries in recent years, to expose the staged, often unattainable ideals shown in many Instagram and influencer photos posted online. But The Followers also tells a darker story: one of increasingly worrisome privacy concerns amid an ever-growing network of surveillance technology in public spaces. And the project, as well as the techniques used to create it, has sparked both ethical and legal controversy.

To make The Followers, Depoorter started with EarthCam, a network of publicly accessible webcams around the world, to record a month’s worth of footage in tourist attractions like New York City’s Times Square and Dublin’s Temple Bar Pub. Then he enlisted an artificial intelligence (A.I.) bot, which scraped public Instagram photos taken in those locations, and facial-recognition software, which paired the Instagram images with the real-time surveillance footage.
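Depoorter hasn’t published his code, but the pairing step he describes boils down to comparing face embeddings from the two sources. This is a hedged, self-contained sketch of that matching logic; the toy three-number vectors stand in for the high-dimensional embeddings a real face-recognition model would produce, and all names and data are invented:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def match_posts_to_footage(posts, frames, threshold=0.9):
    """Pair each Instagram post with its most similar surveillance frame,
    keeping only pairs that clear the similarity threshold."""
    pairs = []
    for post_id, post_vec in posts.items():
        frame_id, frame_vec = max(frames.items(), key=lambda kv: cosine(post_vec, kv[1]))
        if cosine(post_vec, frame_vec) >= threshold:
            pairs.append((post_id, frame_id))
    return pairs

# Toy embeddings: post_a closely resembles the 14:02 frame; post_b matches nothing.
frames = {"timessq_14:02": [0.9, 0.1, 0.2], "timessq_14:07": [0.1, 0.9, 0.3]}
posts = {"post_a": [0.88, 0.12, 0.21], "post_b": [0.5, 0.5, 0.7]}
assert match_posts_to_footage(posts, frames) == [("post_a", "timessq_14:02")]
```

The unsettling part the project illustrates is how little is needed beyond this: the footage and the geotagged photos are both publicly accessible, so the threshold comparison is all that stands between “public webcam” and “tracking a specific person.”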

Depoorter calls himself a “surveillance artist,” and this isn’t his first project using open-source webcam footage or A.I. Last year, for a project called The Flemish Scrollers, he paired livestream video of Belgian government proceedings with an A.I. bot he built to determine how often lawmakers were scrolling on their phones during official meetings.

“The idea [for The Followers] popped in my head when I watched an open camera and someone was taking pictures for like 30 minutes,” Depoorter tells Vice’s Samantha Cole. He wondered if he’d be able to find that person on Instagram.

[…]

The Followers has also hit some legal snags since going live. The project was originally up on YouTube, but EarthCam filed a copyright claim, and the piece has since been taken down. Depoorter tells Hyperallergic that he’s attempting to resolve the claim and get the videos re-uploaded. (The project is still available to view on the official website and the artist’s Twitter).

Depoorter hasn’t replied directly to much of the criticism, but he tells Input he wants the art to speak for itself. “I know which questions it raises, this kind of project,” he says. “But I don’t answer the question itself. I don’t want to put a lesson into the world. I just want to show the dangers of new technologies.”

Source: This Controversial Artist Matches Influencer Photos With Surveillance Footage | Smart News | Smithsonian Magazine

Fitbit accounts are being replaced by Google accounts

New Fitbit users will be required to sign up with a Google account from next year, and it appears one will also be needed to access some new features in the years to come.

Google has been slowly integrating Fitbit into the fold since buying the company back in November 2019. Indeed, the latest products are now known as “Fitbit by Google”. As it currently stands, however, device owners have been able to maintain separate Google and Fitbit accounts.

Google has now revealed it is bringing Google accounts to Fitbit in 2023, enabling a single login for both services. From that point on, all new sign-ups will be through Google. Fitbit accounts will only be supported until 2025.

After that, a Google account will be the only option. To aid the transition, once the introduction of Google accounts begins, it’ll be possible to move existing devices over while keeping all of the recorded data.

[…]

“We’ll be transparent with our customers about the timeline for ending Fitbit accounts through notices within the Fitbit app, by email, and in help articles.”

Whether that will be enough to assuage the concerns of the Fitbit user base – who didn’t have a say on whether Google bought their personal fitness data – remains to be seen.

Source: Fitbit accounts are being replaced by Google accounts | Trusted Reviews

So much for the wonderful cloud – first of all, why should this data go to the cloud at all? Second, you thought you were giving it to one provider, but it turns out you’re giving it to another, with no opt-out other than trashing an expensive piece of hardware.

Meta ordered to pay $175 million in patent infringement case

A federal judge in Texas has ordered the company to pay Voxer, the developer of an app called Walkie Talkie, nearly $175 million as an ongoing royalty. Voxer accused Meta of infringing its patents and incorporating that tech in Instagram Live and Facebook Live.

In 2006, Tom Katis, the founder of Voxer, started working on a way to resolve communications problems he faced while serving in the US Army in Afghanistan, as TechCrunch notes. Katis and his team developed tech that allows for live voice and video transmissions, which led to Voxer debuting the Walkie Talkie app in 2011.

According to the lawsuit, soon after Voxer released the app, Meta (then known as Facebook) approached the company about a collaboration. Voxer is said to have revealed its proprietary technology as well as its patent portfolio to Meta, but the two sides didn’t reach an agreement. Voxer claims that even though Meta didn’t have live video or voice services back then, it identified the Walkie Talkie developer as a competitor and shut down access to Facebook features such as the “Find Friends” tool.

Meta debuted Facebook Live in 2015. Katis claims to have had a chance meeting with a Facebook Live product manager in early 2016 to discuss the alleged infringements of Voxer’s patents in that product, but Meta declined to reach a deal with the company. Meta released Instagram Live later that year. “Both products incorporate Voxer’s technologies and infringe its patents,” Voxer claimed in the lawsuit.

[…]

Source: Meta ordered to pay $175 million in patent infringement case | Engadget

US Military Bought Mass Monitoring Tool That Includes Internet Browsing, Email Data, Cookies from guy who helps run Tor

Multiple branches of the U.S. military have bought access to a powerful internet monitoring tool that claims to cover over 90 percent of the world’s internet traffic, and which in some cases provides access to people’s email data, browsing history, and other information such as their sensitive internet cookies, according to contracting data and other documents reviewed by Motherboard.

Additionally, Sen. Ron Wyden says that a whistleblower has contacted his office concerning the alleged warrantless use and purchase of this data by NCIS, a civilian law enforcement agency that’s part of the Navy, after filing a complaint through the official reporting process with the Department of Defense, according to a copy of the letter shared by Wyden’s office with Motherboard.

The material reveals the sale and use of a previously little-known monitoring capability that is powered by data purchased from the private sector. The tool, called Augury, is developed by cybersecurity firm Team Cymru; it bundles a massive amount of data together and makes it available to government and corporate customers as a paid service. In private industry, cybersecurity analysts use it to follow hackers’ activity or attribute cyberattacks. In the government world, analysts can do the same, but agencies that deal with criminal investigations have also purchased the capability. The military agencies did not describe their use cases for the tool. Even so, the sale highlights how Team Cymru obtains this controversial data and then sells it as a business, something that has alarmed multiple sources in the cybersecurity industry.

“The network data includes data from over 550 collection points worldwide, to include collection points in Europe, the Middle East, North/South America, Africa and Asia, and is updated with at least 100 billion new records each day,” a description of the Augury platform in a U.S. government procurement record reviewed by Motherboard reads. It adds that Augury provides access to “petabytes” of current and historical data.

Motherboard has found that the U.S. Navy, Army, Cyber Command, and the Defense Counterintelligence and Security Agency have collectively paid at least $3.5 million to access Augury. This allows the military to track internet usage using an incredible amount of sensitive information. Motherboard has extensively covered how U.S. agencies gain access to data that in some cases would require a warrant or other legal mechanism by simply purchasing data that is available commercially from private companies. Most often, the sales center around location data harvested from smartphones. The Augury purchases show that this approach of buying access to data also extends to information more directly related to internet usage.

[…]

The Augury platform makes a wide array of different types of internet data available to its users, according to online procurement records. These types of data include packet capture data (PCAP) related to email, remote desktop, and file sharing protocols. PCAP generally refers to a full capture of data, and encompasses very detailed information about network activity. PCAP data includes the request sent from one server to another, and the response from that server too.
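The PCAP container format itself is simple enough to illustrate with a toy parser. The sketch below is not Augury’s tooling, and the HTTP payload and timestamp are invented; it builds a minimal libpcap-format byte stream and walks its records, which shows concretely why a full capture exposes the contents of requests and responses rather than just traffic metadata.

```python
import struct

# Build a minimal synthetic .pcap byte stream (libpcap format) so the
# parser below has something to read; real captures come off the wire.
# Global header: magic, version 2.4, tz offset, sigfigs, snaplen, linktype.
global_header = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
payload = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
# Per-record header: timestamp (seconds, microseconds), captured length,
# original length on the wire.
record = struct.pack("<IIII", 1_660_000_000, 0, len(payload), len(payload)) + payload
capture = global_header + record

def parse_pcap(data):
    """Yield (timestamp, packet_bytes) for each record in a pcap byte string."""
    magic, = struct.unpack_from("<I", data, 0)
    assert magic == 0xA1B2C3D4, "unsupported pcap variant"
    offset = 24  # the global header is 24 bytes
    while offset < len(data):
        ts_sec, ts_usec, incl_len, _orig_len = struct.unpack_from("<IIII", data, offset)
        offset += 16
        yield ts_sec + ts_usec / 1e6, data[offset:offset + incl_len]
        offset += incl_len

for ts, pkt in parse_pcap(capture):
    print(ts, pkt[:20])
```

Each record carries the full captured bytes of a packet, which is the point the article is making: anyone holding PCAP data holds the conversation itself, not merely a log that it happened.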

[…]

Augury also contains so-called netflow data, which creates a picture of traffic flow and volume across a network. That can include which server communicated with another, information that may ordinarily only be available to the server owner themselves or to the internet service provider carrying the traffic. That netflow data can be used to follow traffic through virtual private networks and show the server a user is ultimately connecting from.
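The contrast between netflow and full packet capture can be made concrete with a small sketch (the IP addresses below are documentation-range placeholders, not real observations): netflow-style records discard packet contents and keep only who talked to whom, and how much.

```python
from collections import defaultdict

# Hypothetical per-packet observations: (source IP, destination IP, bytes).
packets = [
    ("203.0.113.5", "198.51.100.7", 1200),
    ("203.0.113.5", "198.51.100.7", 800),
    ("192.0.2.9",   "198.51.100.7", 400),
]

def to_flows(packets):
    """Aggregate packets into (src, dst) -> (packet count, total bytes).

    This is the essence of a flow record: no payloads, just the shape
    and volume of the traffic between pairs of hosts.
    """
    flows = defaultdict(lambda: [0, 0])
    for src, dst, size in packets:
        flows[(src, dst)][0] += 1
        flows[(src, dst)][1] += size
    return {pair: tuple(stats) for pair, stats in flows.items()}

print(to_flows(packets))
# {('203.0.113.5', '198.51.100.7'): (2, 2000), ('192.0.2.9', '198.51.100.7'): (1, 400)}
```

Even without contents, such records are revealing: collected at enough vantage points, they let an analyst trace a connection hop by hop, which is how flow data can unmask traffic that passes through a VPN.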

[…]

Team Cymru obtains this netflow data from ISPs; in return, Team Cymru provides the ISPs with threat intelligence. That transfer of data is likely happening without the informed consent of the ISPs’ users. A source familiar with the netflow data previously told Motherboard that “the users almost certainly don’t [know]” their data is being provided to Team Cymru, who then sells access to it.

It is not clear where exactly Team Cymru obtains the PCAP and other more sensitive information, whether that’s from ISPs or another method.

[…]

Beyond his day job as CEO of Team Cymru, Rabbi Rob Thomas also sits on the board of the Tor Project, a privacy focused non-profit that maintains the Tor software. That software is what underpins the Tor anonymity network, a collection of thousands of volunteer-run servers that allow anyone to anonymously browse the internet.

“Just like Tor users, the developers, researchers, and founders who’ve made Tor possible are a diverse group of people. But all of the people who have been involved in Tor are united by a common belief: internet users should have private access to an uncensored web,” the Tor Project’s website reads.

[…]

Source: Revealed: US Military Bought Mass Monitoring Tool That Includes Internet Browsing, Email Data

Meta sued for allegedly secretly tracking iPhone users

Meta was sued on Wednesday for alleged undisclosed tracking and data collection in its Facebook and Instagram apps on Apple iPhones.

The lawsuit [PDF], filed in a US federal district court in San Francisco, claims that the two applications use their own in-app browser, known as a WKWebView, which injects JavaScript code to gather data that would otherwise be unavailable if the apps opened links in the default standalone browser designated by iPhone users.

The claim is based on the findings of security researcher Felix Krause, who last month published an analysis of how WKWebView browsers embedded within native applications can be abused to track people and violate privacy expectations.

“When users click on a link within the Facebook app, Meta automatically directs them to the in-app browser it is monitoring instead of the smartphone’s default browser, without telling users that this is happening or they are being tracked,” the complaint says.

“The user information Meta intercepts, monitors and records includes personally identifiable information, private health details, text entries, and other sensitive confidential facts.”

[…]

However, Meta’s use of in-app browsers in its mobile apps predates Apple’s App Tracking Transparency (ATT) initiative. Apple introduced WKWebView at its 2014 Worldwide Developers Conference as a replacement for its older UIWebView (UIKit) and WebView (AppKit) frameworks. That was in iOS 8. With the arrival of iOS 9, as described at WWDC 2015, there was another option, SFSafariViewController, which is now what Apple recommends for displaying a website within an app.

And the company’s use of in-app browsers has elicited concern before.

“On top of limited features, WebViews can also be used for effectively conducting intended man-in-the-middle attacks, since the IAB [in-app browser] developer can arbitrarily inject JavaScript code and also intercept network traffic,” wrote Thomas Steiner, a Google developer relations engineer, in a blog post three years ago.

In his post, Steiner emphasizes that he didn’t see anything unusual like a “phoning home” function.

Krause has taken a similar line, noting only the potential for abuse. In a follow-up post, he identified additional data gathering code.

He wrote, “Instagram iOS subscribes to every tap on any button, link, image or other component on external websites rendered inside the Instagram app” and also “subscribes to every time the user selects a UI element (like a text field) on third party websites rendered inside the Instagram app.”

However, “subscribes” simply means that analytics data is accessible within the app, without offering any conclusion about what, if anything, is done with the data. Krause also points out that since 2020, Apple has offered a framework called WKContentWorld that isolates the web environment from scripts. Developers using an in-app browser can implement WKContentWorld in order to make scripts undetectable from the outside, he said.

Whatever Meta is doing internally with its in-app browser, and even given the company’s insistence its injected script validates ATT settings, the plaintiffs suing the company argue there was no disclosure of the process.

“Meta fails to disclose the consequences of browsing, navigating, and communicating with third-party websites from within Facebook’s in-app browser – namely, that doing so overrides their default browser’s privacy settings, which users rely on to block and prevent tracking,” the complaint says. “Similarly, Meta conceals the fact that it injects JavaScript that alters external third-party websites so that it can intercept, track, and record data that it otherwise could not access.”

[…]

Source: Meta sued for allegedly secretly tracking iPhone users • The Register

Study Shows That Copyright Filters Harm Creators Rather Than Help Them

The EU Copyright Directive contains one of the worst ideas in modern copyright: what amounts to a requirement to filter uploads on major sites. Despite repeated explanations of why this would cause huge harm to both creators and members of the public, EU politicians were taken in by the soothing words of the legislation’s proponents, who even went so far as to deny that upload filters would be required at all.

The malign effects of the EU Copyright Directive have not yet been felt, as national legislatures struggle to implement a law with deep internal contradictions. However, upload filters are already used on an ad hoc basis, for example YouTube’s Content ID. There is thus already mounting evidence of the problems with the approach. A new report, from the Colombian Fundación Karisma, adds to the concerns by providing additional examples of how creators have already suffered from upload filters:

This research found multiple cases of unjustified notifications of supposed violation of copyright directed at content that is either part of the public domain, original content, or instances of judicial overreach of copyright law. The digital producers that are the target of these unjust notifications affirm that the appeal process and counter-notification procedures don’t help them protect their rights. The appeals interface of the different platforms that were taken into account did not help resolve the cases, which leaves digital creators defenseless with no alternative other than what they can obtain from their contacts. This system damages the capacity of these producers to grow, maintain and monetize an audience at the same time that it affects the liberty of expression of independent producers as it creates a strong disincentive for them. On the contrary, this system incentivizes the bigger production companies to claim copyright on content to which they hold no rights.

As that summary notes, it’s not just that material was blocked without justification. Compounding the problem are appeal processes that are biased against creators, and a system that is rigged in favor of Big Content to the point where companies can falsely claim copyright on the work of others. The Fundación Karisma report is particularly valuable because it describes what has been happening in Colombia, rounding out other work that typically looks at the situation in the US and EU.

Source: Study Shows That Copyright Filters Harm Creators Rather Than Help Them | Techdirt

Google now lets you request the removal of search results that contain personal data

Google is releasing a tool that makes it easier to remove search results containing your address, phone number and other personally identifiable information, 9to5Google has reported. It first revealed the “results about you” feature at I/O 2022 in May, describing it as a way to “help you easily control whether your personally-identifiable information can be found in Search results.”

If you see a result with your phone number, home address or email, you can click on the three-dot menu at the top right. That opens the usual “About this result” panel, but it now contains a new “Remove result” option at the bottom of the screen. A dialog states that if the result contains one of those three things, “we can review your request more quickly.”

[…]

“It’s important to note that when we receive removal requests, we will evaluate all content on the web page to ensure that we’re not limiting the availability of other information that is broadly useful, for instance in news articles. And of course, removing contact information from Google Search doesn’t remove it from the web, which is why you may wish to contact the hosting site directly, if you’re comfortable doing so.”

[…]

Source: Google now lets you request the removal of search results that contain personal data | Engadget