EU Bolsters Net Neutrality With Ruling Against Zero Rating

The European Union’s top court has flipped the bird to German mobile network operators Telekom Deutschland and Vodafone, ruling in two separate judgements that their practice of exempting certain services from data caps violated the bloc’s net neutrality rules.

“Zero rating” is when service providers offer customers plans that exempt certain data-consuming services (be it Spotify, Netflix, gaming, or whatever) from contributing towards data caps. Very often, those services are commercial partners of the provider, or even part of the same massive media conglomerate, allowing the provider to exert pressure on customers to use their data in a way that profits the provider further. This has the convenient benefit of making it easier for providers to keep ridiculous data-overage fees in place while punishing competing services that customers might otherwise use more. No one wins, except for the telecom racket.

Net neutrality is the principle that telecom providers should treat all data flowing over their networks equally, not prioritizing one service over another for commercial gain. As Fortune reported, the version of net neutrality rules passed in the European Union in 2015 was at the time weaker than the Barack Obama-era rules in the U.S., as it didn’t explicitly ban zero rating. That’s no longer the case, as Donald Trump appointees at the Federal Communications Commission nuked the U.S.’s net neutrality rules in 2017, and a series of subsequent regulatory decisions and court rulings in the EU narrowed the scope of zero-rating practices there.

In 2016, EU regulators found that zero rating would be allowed so long as the zero-rated services were also slowed down when a customer ran up against a data cap, according to Fortune. In 2020, the Court of Justice of the European Union (CJEU) confirmed that interpretation and found it was illegal to block or slow down data after a user hit their cap on the basis that a particular service wasn’t part of a zero-rating deal. Still, carriers in the EU have continued to offer zero-rating plans, relying on perceived loopholes in the law.

The CJEU ruled on two separate cases involving Telekom and Vodafone on Thursday, which according to Reuters were brought by Germany’s Federal Network Agency (BNetzA) and the VZBV consumer association, respectively. At issue in the Telekom case was its “StreamOn” service, which exempts streaming services that work with the company from counting towards data caps—and throttles all video streaming, regardless of whether it’s from one of the StreamOn partners, when the cap is hit. The Vodafone case involved its practice of counting zero-rated services or mobile hotspot traffic towards the data cap—advertising those plans with names like “Music Pass” or “Video Pass,” according to Engadget—when a customer leaves Germany to travel somewhere else in the EU.

Both of the companies’ plans violated net neutrality principles, the CJEU found, in a completely unambiguous decision titled “‘Zero tariff’ options are contrary to the regulation on open internet access.” Fortune wrote that BNetzA has already concluded that the court’s decision means that Telekom will likely not be able to continue StreamOn in its “current form.”

“By today’s judgments, the Court of Justice notes that a ‘zero tariff’ option, such as those at issue in the main proceedings, draws a distinction within Internet traffic, on the basis of commercial considerations, by not counting towards the basic package traffic to partner applications,” the CJEU told media outlets in a statement. “Such a commercial practice is contrary to the general obligation of equal treatment of traffic, without discrimination or interference, as required by the regulation on open Internet access.”

The court added, “Since those limitations on bandwidth, tethering or on use when roaming apply only on account of the activation of the ‘zero tariff’ option, which is contrary to the regulation on open Internet access, they are also incompatible with EU law.”

Source: EU Bolsters Net Neutrality With Ruling Against Zero Rating

Sky Broadband sends subscribers’ browsing data to Premier League without user knowledge or consent

UK ISP Sky Broadband is monitoring the IP addresses of servers suspected of streaming pirated content to subscribers and supplying that data to an anti-piracy company working with the Premier League. That inside knowledge is then processed and used to create blocklists, which the country’s leading ISPs deploy to prevent subscribers from watching pirated events.

[…]

In recent weeks, an anonymous source shared a small trove of information relating to the systems used to find, positively identify, and then ultimately block pirate streams at ISPs. According to the documents, the module related to the Premier League work is codenamed ‘RedBeard’.

The activity appears to start during the week football matches or PPV events take place. A set of scripts at anti-piracy company Friend MTS are tasked with producing lists of IP addresses that are suspected of being connected to copyright infringement. These addresses are subsequently dumped to Amazon S3 buckets and the data is used by ISPs to block access to infringing video streams, the documents indicate.

During actual event scanning, content is matched either manually or by fingerprinting, with IP addresses extracted from DNS information related to hostnames in media URLs, load balancers, and servers hosting Electronic Program Guides (EPG), all of which are used by unlicensed IPTV services.
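The leaked documents describe this pipeline only at a high level, so here is a rough Python sketch of that kind of workflow: resolve the hostnames seen in stream URLs to IP addresses, then publish the resulting list to an S3 bucket for ISPs to fetch. Every name below (the hostnames, the bucket, the key) is invented for illustration; the actual Friend MTS scripts are not public.

```python
# Hypothetical sketch of the pipeline the documents describe: resolve
# hostnames from pirate-stream URLs to IPs, publish the list to S3.
import socket
import boto3

suspect_hosts = ["cdn.example-iptv.net", "epg.example-iptv.net"]  # invented

def resolve_ips(hostnames):
    """Collect the unique IPv4 addresses behind each hostname."""
    ips = set()
    for host in hostnames:
        try:
            for info in socket.getaddrinfo(host, 443, socket.AF_INET):
                ips.add(info[4][0])  # sockaddr tuple: (address, port)
        except socket.gaierror:
            pass  # hostname no longer resolves; skip it
    return sorted(ips)

blocklist = resolve_ips(suspect_hosts)
boto3.client("s3").put_object(
    Bucket="example-blocklist-bucket",   # invented bucket name
    Key="redbeard/blocklist.txt",
    Body="\n".join(blocklist).encode(),
)
```

An ISP-side consumer would then periodically download that object and feed the addresses into its blocking infrastructure.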

Confirmed: Sky is Supplying Traffic Data to Assist IPTV Blocking

The big question then is how the Premier League’s anti-piracy partner discovers the initial server IP addresses that it subsequently puts forward for ISP blocking.

According to documents reviewed by TF, information comes from three sources – the anti-piracy company’s regular monitoring (which identifies IP addresses and their /24 range), manually entered IP addresses (IP addresses and ports), and a third, potentially more intriguing source – ISPs themselves.

“ISPs provide lists of Top Talker IP addresses, these are the IP addresses that they see on their network which many consumers are receiving a large sum of bandwidth from,” one of the documents reveals.

“The IP addresses are the uploading IP address which host information which the ISP’s customers are downloading information from. They are not the IP addresses of the ISP’s customer’s home internet connections.”

The document revealing this information is not dated, but other documents in the batch reference dates in 2021. At the time of publication, the document indicates that ISP cooperation is currently limited to Sky Broadband only. TorrentFreak asked Friend MTS if that remains the case or whether additional ISPs are now involved.

[…]

Source: Sky Subscribers’ Piracy Habits Directly Help Premier League Block Illegal Streams • TorrentFreak

Apple stalls CSAM auto-scan on devices after ‘feedback’ from everyone on Earth – will still scan all your pics at some point

Apple on Friday said it intends to delay the introduction of its plan to commandeer customers’ own devices to scan their iCloud-bound photos for illegal child exploitation imagery, a concession to the broad backlash that followed from the initiative.

“Previously we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them and to help limit the spread of Child Sexual Abuse Material,” the company said in a statement posted to its child safety webpage.

“Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

[…]

Apple – rather than actually engaging with the security community and the public – published a list of Frequently Asked Questions and responses to address the concern that censorious governments will demand access to the CSAM scanning system to look for politically objectionable images.

“Could governments force Apple to add non-CSAM images to the hash list?” the company asked in its interview of itself, and then responded, “No. Apple would refuse such demands and our system has been designed to prevent that from happening.”

Apple however has not refused government demands in China with regard to VPNs or censorship. Nor has it refused government demands in Russia, with regard to that country’s 2019 law requiring pre-installed Russian apps.

Tech companies uniformly say they comply with all local laws. So if China, Russia, or the US were to pass a law requiring on-device scanning to be adapted to address “national security concerns” or some other plausible cause, Apple’s choice would be to comply or face the consequences – it would no longer be able to say, “We can’t do on-device scanning.”

Source: Apple stalls CSAM auto-scan on devices after ‘feedback’ from everyone on Earth • The Register

Lenovo pops up tips on its tablets. And by tips, Lenovo means: Unacceptable ads

Lenovo has come under fire for the Tips application on its tablets, which has been likened to indelible adware that forces folks to view ads.

One customer took to the manufacturer’s support forum late last month to say they were somewhat miffed to see an ad suddenly appear on screen to join Amazon Music on their Android-powered Lenovo Tab P11. The advertisement was generated as a push notification by the bundled Tips app.

“There is no option to dismiss,” the fondleslab fondler sighed. “You have to click to find out more. Further, these notifications cannot be disabled, nor can the Lenovo ‘Tips’ app be disabled.”

They went on to say: “This is not a tip. This is a push that is advertising a paid service. I loathe this sort of thing.”

Another chipped in: “I have a Lenovo Tab that also has this bloatware virus installed. There’s no way to disable the adverts (they call the ads tips, they’re not, they’re adverts for Amazon music etc.) This is ridiculous, Lenovo, I didn’t spend £170 on a tablet to be pumped with ads. Will not buy another Lenovo product.”

[…]

Source: Lenovo pops up tips on its tablets. And by tips, Lenovo means: Unacceptable ads • The Register

Huge GTA San Andreas Mod Killed Because Of Take-Two Harassment

After months of Take-Two Interactive attacking and fighting GTA modders, the folks behind the long-in-development San Andreas mod, GTA Underground, have killed the project and removed it from the web over “increasing hostility” from Take-Two and fears of further legal problems.

Over the last few months, Take-Two Interactive — the parent company of GTA devs Rockstar Games — has gone on a digital murder spree, sending multiple takedown notices to get old 3D-era GTA mods and source ports removed from the internet. The publisher is also suing the creators behind reverse-engineered source ports of Vice City and GTA III. As a result of this hostility, GTA Underground lead developer dkluin wrote in a post yesterday on the GTAForums that they and the other modders working on the project were now “officially ceasing the development” of GTA: Underground.

“Due to the increasing hostility towards the modding community and imminent danger to our mental and financial well-being,” explained dkluin, “We sadly announce that we are officially ceasing the development of GTA: Underground and will be shortly taking all official uploads offline.”

Dkluin also thanked the community for the support they received over the last six years and mentioned all the “incredible work” that went into the mod and the “great times” the team experienced working on it together. A final video, simply named “The End.”, was uploaded today on the modding team’s YouTube channel.

GTA Underground is a mod created for GTA San Andreas with the goal of merging all of the previous GTA maps into one mega environment. The mod even aimed to bring other cities from non-GTA games developed by Rockstar into San Andreas, including the cities featured in Bully and Manhunt.

The mod had already faced some problems from Take-Two in July. As a result, it was removed from ModDB. It has now been removed from all other official sources and sites.

In 2018, Kotaku interviewed dkluin about the mod and all the work going into it. He had started development on it back in 2014, when he was only 14 years old. GTA Underground isn’t a simple copy-and-paste job; instead, the modders added AI and traffic routines to every map, making them fully playable as GTA cities. The team also had plans to add more cities to the game, including their own custom creations.

[…]

Source: Fan Dev Shuts Down Huge GTA San Andreas Mod Because Of Take-Two

Way to piss off your fan base

Judge Says an AI Can’t Be Listed as an Inventor on a Patent because they are not people

U.S. federal judge Leonie Brinkema ruled this week that an AI can’t be listed as an inventor on a U.S. patent under current law. The case was brought forward by Stephen Thaler, who is part of the Artificial Inventor Project, an international initiative that argues that an AI should be allowed to be listed as an inventor in a patent (the owner of the AI would legally own the patent).

Thaler sued the U.S. Patent and Trademark Office after it denied his patent applications because he had listed the AI named DABUS as the inventor of a new type of flashing light and a beverage container. In various responses spanning several months, the Patent Office explained to Thaler that a machine does not qualify as an inventor because it is not a person. In fact, the machine is a tool used by people to create inventions, the agency maintained.

Brinkema determined that the Patent Office correctly enforced the nation’s patent laws and pointed out that it basically all boils down to the everyday use of language. In the latest revision of the nation’s patent law in 2011, Congress explicitly defined an inventor as an “individual.” The Patent Act also references an inventor using words such as “himself” and “herself.”

“By using personal pronouns such as ‘himself or herself’ and the verb ‘believes’ in adjacent terms modifying ‘individual,’ Congress was clearly referencing a natural person,” Brinkema said in her ruling, which you can read in full at The Verge. “Because ‘there is a presumption that a given term is used to mean the same thing throughout a statute,’ the term ‘individual’ is presumed to have a persistent meaning throughout the Patent Act.”

[…]

“As technology evolves, there may come a time when artificial intelligence reaches a level of sophistication such that it might satisfy accepted meanings of inventorship. But that time has not yet arrived, and, if it does, it will be up to Congress to decide how, if at all, it wants to expand the scope of patent law,” Brinkema said.

Source: Judge Says an AI Can’t Be Listed as an Inventor on a Patent

Does that mean gender neutral people can’t be inventors either?

Gorgeous Hand-Drawn Game Guides Kickstarter Cancelled By Nintendo – copyright is seriously broken

Hand-Drawn Game Guides creator Philip Summers knew it was a legal risk to launch his gorgeous, storybook-style Nintendo game guides on Kickstarter, but it was a risk he was willing to take. When you pore over all the stunning artwork of his unofficial Metroid, Contra, Ninja Gaiden and Legend of Zelda guides, you can see why.

The Kickstarter for the project showed off a range of fantastic, full-colour comic walkthroughs designed to take players through the complex missions and goals of each featured game. According to Summers, the intention behind these guides was to replicate the feeling of leafing through a good game guide as a child, with all the wonder and spectacle that used to go along with it.

But while the game guides look absolutely stunning — and were a major success on Kickstarter, raking in over $300,000 — the project has now been cancelled, courtesy of Nintendo.

In a recent update, Summers shared the grim news that the books would no longer go into production.

“Tonight I pulled the plug on the Hand-Drawn Game Guides Kickstarter. Yes, for exactly the reason you think it’s for,” he said in an update on Kickstarter. “I had hoped that I could successfully navigate any legal trouble, but alas I wasn’t able to do so.”

For fans of the project, it’s a major bummer — but Summers says he’s still grateful for the experience.

“Of course I’m disappointed, but I completely understand why this happened,” he explained. “It’s okay. I’m not mad.”

For now, all orders for the game guides will be cancelled, although Summers says he’ll find out whether the project is truly dead in the water “in the coming days”. Backers can expect a cancellation email shortly if they don’t already have one, and all money will be refunded via the original payment method.

It really is a disappointing turn of events.

While these game guides were always going to have IP issues, with Nintendo being notoriously strict about protecting its assets, each book is a lavish work of art, created after painstaking hours of work. Summers’ talent and passion are clear on every page and frankly, his game guides look far better than anything else on the market.

Here’s hoping Summers is still able to produce these guides in some capacity, whether that be through official channels or an entire rework of the project.

These Hand-Drawn Game Guides deserve their time in the sun, and a place on all our shelves, regardless of Nintendo’s efforts to nuke the project.

Source: Gorgeous Hand-Drawn Game Guides Kickstarter Cancelled By Nintendo

Texan Anti-Abortion Snitching Site Violates GoDaddy’s Rules – help bring it down

Even if you don’t live in Texas, you’ve likely heard about the state’s draconian abortion restrictions that officially went into effect on Wednesday. The so-called “Heartbeat Bill,” aka Senate Bill 8, makes it fully illegal for anyone—friends, family, doctors—across Texas to help women access an abortion in the state after their sixth week of pregnancy.

You might have also seen the digital tipline that’s been set up to snitch on anyone violating the new law. The site was launched about a month ago by Texas Right To Life, a well-funded player in the world of anti-abortion politics.

“Any Texan can bring a lawsuit against an abortionist or someone aiding and abetting an abortion after six weeks,” the website reads. “If these individuals are proved to be violating the law, they have to pay a fine of at least $10,000.” It’s worth noting here that because “aiding and abetting” is such a vague term, others have used the impending law to not only justify going after the doctors or clinicians performing these medical procedures but anyone who helps women get an abortion in any way. This includes driving a friend to the clinic, or lending someone money so they can get an abortion they can’t afford on their own.

As you might expect with a tipline like this, people didn’t waste any time flooding the line with the vilest stuff you can think of: fake claims, furry porn, pictures of Shrek, you name it.

Unfortunately, overloading the site with pictures of everyone’s favorite ogre wasn’t enough to knock it from the web, nor were the multiple denial-of-service attacks that slammed the site on the eve of the bill going into action. But there is another route people can take: pleading with the company that keeps the site online. In this case, that’s the site’s registrar, GoDaddy—a company that’s historically known for being kind of terrible all around, but also one with a slew of rules for what its sites can be used for. In the company’s terms of service for users, GoDaddy mandates that its site owners cannot use a GoDaddy-hosted site to:

collect or harvest (or permit anyone else to collect or harvest) any User Content (as defined below) or any non-public or personally identifiable information about another User or any other person or entity without their express prior written consent.

The ToS also states that GoDaddy’s customers cannot use its platform in a manner that “violates the privacy or publicity rights of another User or any other person or entity, or breaches any duty of confidentiality that you owe to another User or any other person or entity.” In either case, a site solely set up to out people who try to help someone obtain a sensitive, stigmatized medical procedure probably falls under this domain.

GoDaddy has its own specific tipline set up for users to reach when they see a site running afoul of the company’s privacy rules: privacy@godaddy.com. People can also fill out an abuse report with the platform, and let GoDaddy know that they’ve come across “content that displays personal information.” While the examples that GoDaddy gives in the form are sites listing people’s social security or credit card numbers, the Texas tipline is a pretty clear privacy violation of a different sort.

Aside from violating the privacy of god knows how many women, along with their friends, family, and doctors, the site also apparently violates the privacy of people submitting tips. A Gizmodo analysis of the webpage for submitting tips found that when these memos are “anonymously” submitted, the site covertly harvests the IP address of whoever submits the tip via a hidden field.
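Gizmodo didn’t publish the site’s code, but the mechanism it describes is a common one, so here is a minimal Flask sketch of how a form can covertly attach the submitter’s IP: the server stamps the address into a hidden field when it renders the page, and the “anonymous” tip carries it back on submission. Everything here is illustrative; we don’t know how the Texas site actually implements it.

```python
# Minimal sketch of IP harvesting via a hidden form field (illustrative).
from flask import Flask, request, render_template_string

app = Flask(__name__)

FORM = """
<form method="post" action="/submit-tip">
  <textarea name="tip"></textarea>
  <!-- never shown to the user, but submitted with the form -->
  <input type="hidden" name="client_ip" value="{{ ip }}">
  <button type="submit">Send anonymously</button>
</form>
"""

@app.route("/")
def tip_form():
    # the visitor's IP is baked into the page before it is served
    return render_template_string(FORM, ip=request.remote_addr)

@app.route("/submit-tip", methods=["POST"])
def submit_tip():
    # the "anonymous" tip arrives with the IP attached
    print(request.form["tip"], request.form["client_ip"])
    return "Received"
```

Note that a server could also simply log `request.remote_addr` at submission time; the hidden field is just one way the address ends up inside the tip record itself.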

[…]

Source: Anti-Abortion Site Violates GoDaddy’s Rules

Australia: Unprecedented surveillance bill rushed through parliament in 24 hours.

The Australian government has been moving towards a surveillance state for some years already. Now they are driving the final nail into the coffin with an unprecedented surveillance bill that allows the police to hack your device, collect or delete your data, and take over your social media accounts, without sufficient safeguards to prevent abuse of these new powers.

This month the Australian government passed a sweeping surveillance bill, worse than any similar legislation in any other Five Eyes country.

The Surveillance Legislation Amendment (Identify and Disrupt) Bill 2020 gives the Australian Federal Police (AFP) and the Australian Criminal Intelligence Commission (ACIC) three new powers for dealing with online crime:

  1. Data disruption warrant: gives the police the ability to “disrupt data” by modifying, copying, adding, or deleting it.
  2. Network activity warrant: allows the police to collect intelligence from devices or networks that are used, or likely to be used, by those subject to the warrant.
  3. Account takeover warrant: allows the police to take control of an online account (e.g. social media) for the purposes of gathering information for an investigation.

The two Australian law enforcement bodies AFP and ACIC will soon have the power to modify, add, copy, or delete your data should you become a suspect in the investigation of a serious crime.

What makes this legislation even worse is that there is no judicial oversight. A data disruption or network activity warrant can be issued by a member of the Administrative Appeals Tribunal; a judge’s warrant is not needed.

Australian companies obliged to comply

When presented with such a warrant from the Administrative Appeals Tribunal, Australian companies, system administrators etc. must comply, and actively help the police to modify, add, copy, or delete the data of a person under investigation. Refusing to comply could land one in jail for up to ten years, according to the new bill.

[…]

Politicians justify the need for the bill by stating that it is intended to fight child exploitation (CSAM) and terrorism. However, the bill itself enables law enforcement to investigate any “serious Commonwealth offence” or “serious State offence that has a federal aspect”.

Source: Australia: Unprecedented surveillance bill rushed through parliament in 24 hours.

As soon as it says a law is against Child Porn you know it’s going to be used for a whole load of other things that wouldn’t stand up to public inspection. But who can be against anti-Child Porn stuff, right?

After 18 Years, SCO’s IBM Litigation May Be Settled for $14.5 Million (is this the last SCO court case though? it won’t DIE!!!!)

Slashdot has confirmed with the U.S. Bankruptcy Court for the District of Delaware that after 18 years of legal maneuvering, SCO’s bankruptcy case (first filed in 2007) is now “awaiting discharge.”

Long-time Slashdot reader rkhalloran says they know the reason: Papers filed 26 Aug by IBM & SCOXQ in U.S. Bankruptcy Court in Delaware for a proposed settlement, Case 07-11337-BLS Doc 1501:

By the Settlement Agreement, the Trustee has reached a settlement with IBM that resolves all of the remaining claims at issue in the Utah Litigation (defined below). The Settlement Agreement is the culmination of extensive arm’s length negotiation between the Trustee and IBM.

Under the Settlement Agreement, the Parties have agreed to resolve all disputes between them for a payment to the Trustee, on behalf of the Estates, of $14,250,000. For the reasons set forth more fully below, the Trustee submits the Settlement Agreement and the settlement with IBM are in the best interests of the Estates and creditors, are well within the range of reasonableness, and should be approved.
The proposed order would include “the release of the Estates’ claims against IBM and vice versa” (according to this PDF attributed to SCO Group and IBM uploaded to scribd.com). And one of the reasons given for the proposed settlement? “The probability of the ultimate success of the Trustee’s claims against IBM is uncertain,” according to an IBM/SCO document on Scribd.com titled Trustee’s motion:

For example, succeeding on the unfair competition claims will require proving to a jury that events occurring many years ago constituted unfair competition and caused SCO harm. Even if SCO were to succeed in that effort, the amount of damages it would recover is uncertain and could be significantly less than provided by the Settlement Agreement. Such could be the case should a jury find that (1) the amount of damage SCO sustained as a result of IBM’s conduct is less than SCO has alleged, (2) SCO’s damages are limited by a $5 million damage limitation provision in the Project Monterey agreement, or (3) some or all of IBM’s Counterclaims, alleging millions of dollars in damages related to IBM’s Linux activities and alleged interference by SCO, are meritorious.

Although the Trustee believes the Estates would ultimately prevail on claims against IBM, a not insignificant risk remains that IBM could succeed with its defenses and/or Counterclaims.

The U.S. Bankruptcy Court for the District of Delaware told Slashdot that the first meeting of the creditors will be held on September 22nd, 2021.

Source: After 18 Years, SCO’s IBM Litigation May Be Settled for $14.5 Million – Slashdot

Facebook used facial recognition without consent 200,000 times, says South Korea’s data watchdog. Netflix fined too and Google scolded.

Facebook, Netflix and Google have all received reprimands or fines, and an order to take corrective action, from South Korea’s government data protection watchdog, the Personal Information Protection Commission (PIPC).

The PIPC announced a privacy audit last year and has revealed that three companies – Facebook, Netflix and Google – were in violation of local laws and had insufficient privacy protections.

Facebook alone was ordered to pay 6.46 billion won (US$5.5M) for creating and storing facial recognition templates of 200,000 local users without proper consent between April 2018 and September 2019.

Another 26 million won (US$22,000) penalty was issued for illegally collecting social security numbers, not issuing notifications regarding personal information management changes, and other missteps.

Facebook has been ordered to either destroy the facial information collected without consent or obtain proper consent for it, and was prohibited from processing identity numbers without a legal basis. It was also ordered to destroy illegally collected data and to disclose details of overseas transfers of personal information. Zuck’s brainchild was then told to make it easier for users to check legal notices regarding personal information.

[…]

Netflix’s fine was a paltry 220 million won (US$188,000), with that sum imposed for collecting data from five million people without their consent, plus another 3.2 million won (US$2,700) for not disclosing international transfer of the data.

Google got off the easiest, with just a “recommendation” to improve its personal data handling processes and make legal notices more precise.

The PIPC said it is not done investigating methods of collecting personal information from overseas businesses and will continue with a legal review.

[…]

Source: Facebook used facial recognition without consent 200,000 times, says South Korea’s data watchdog • The Register

OnlyFans CEO on why site is banning porn: ‘The short answer is banks’

After facing criticism over the site’s recent decision to prohibit sexually explicit content starting in October, OnlyFans CEO Tim Stokely pointed the finger at banks for the policy change.

In an interview with the Financial Times published Tuesday, Stokely singled out a handful of banks for “unfair” treatment, saying they made it “difficult to pay our creators.”

Source: OnlyFans CEO on why site is banning porn: ‘The short answer is banks’ – CNET

Samsung Galaxy Z Fold 3’s camera breaks after unlocking the bootloader

[…]

Samsung already makes it extremely difficult to have root access without tripping the security flags, and now the Korean OEM has introduced yet another roadblock for aftermarket development. In its latest move, Samsung disables the cameras on the Galaxy Z Fold 3 after you unlock the bootloader.

Knox is the security suite on Samsung devices, and any modifications to the device will trip it, void your warranty, and disable Samsung Pay permanently. Now, losing all the Knox-related security features is one thing, but having to deal with a broken camera is a trade-off that many will be unwilling to make. But that’s exactly what you’ll have to deal with if you wish to unlock the bootloader on the Galaxy Z Fold 3.

According to XDA Senior Members 白い熊 and ianmacd, the final confirmation screen during the bootloader unlock process on the Galaxy Z Fold 3 mentions that the operation will cause the camera to be disabled. Upon booting up with an unlocked bootloader, the stock camera app indeed fails to operate, and all camera-related functions cease to function, meaning that you can’t use facial recognition either. Anything that uses any of the cameras will time out after a while and give errors or just remain dark, including third-party camera apps.

It is not clear why Samsung chose to go down the same road Sony walked in the past, but the real problem is that many will probably overlook the warning and unlock the bootloader without knowing about this new restriction. Re-locking the bootloader does make the camera work again, which indicates that it’s more of a software-level obstacle. With root access, it could be possible to detect and modify the responsible parameters sent by the bootloader to the OS to bypass this restriction. However, according to ianmacd, Magisk in its default state isn’t enough to circumvent the barrier.
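For the curious, the bootloader state that the OS sees is exposed through Android system properties, which can be read over adb. A small Python sketch follows, with the caveat that property names vary by vendor; the Samsung warranty bit below comes from community documentation, not any official API.

```python
# Read bootloader-related properties Android exposes to the OS, via adb.
import subprocess

def getprop(name):
    out = subprocess.run(
        ["adb", "shell", "getprop", name],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

print("verifiedbootstate:", getprop("ro.boot.verifiedbootstate"))  # green = locked, orange = unlocked
print("flash.locked:", getprop("ro.boot.flash.locked"))            # 1 = locked bootloader
print("warranty bit:", getprop("ro.boot.warranty_bit"))            # Samsung Knox trip flag (assumed name)
```

Values like these are presumably what the camera stack checks; with root, intercepting or spoofing them is the kind of bypass the XDA members are hinting at.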

[…]

Source: Samsung Galaxy Z Fold 3’s camera breaks after unlocking the bootloader

China puts continuous consent at the center of data protection law

[…] The new “Personal Information Protection Law of the People’s Republic of China” comes into effect on November 1st, 2021, and comprises eight chapters and 74 articles.

[…]

The Cyberspace Administration of China (CAC) said, as translated from Mandarin using automated tools:

On the basis of relevant laws, the law further refines and perfects the principles and personal information processing rules to be followed in the protection of personal information, clarifies the boundaries of rights and obligations in personal information processing activities, and improves the work systems and mechanisms for personal information protection.

The document outlines standardized data-handling processes, defines rules on big data and large-scale operations, regulates those processing data, addresses data that flows across borders, and outlines legal enforcement of its provisions. It also clarifies that state agencies are not immune from these measures.

The CAC asserts that consenting to collection of data is at the core of China’s laws and the new legislation requires continual up-to-date fully informed advance consent of the individual. Parties gathering data cannot require excessive information nor refuse products or services if the individual disapproves. The individual whose data is collected can withdraw consent, and death doesn’t end the information collector’s responsibilities or the individual’s rights – it only passes down the right to control the data to the deceased subject’s family.

Information processors must also take “necessary measures to ensure the security of the personal information processed” and are required to set up compliance management systems and internal audits.

To collect sensitive data, like biometrics, religious beliefs, and medical, health and financial accounts, information needs to be necessary, for a specific purpose and protected. Prior to collection, there must be an impact assessment, and the individual should be informed of the collected data’s necessity and impact on personal rights.

Interestingly, the law seeks to prevent companies from using big data to prey on consumers – for example setting transaction prices – or mislead or defraud consumers based on individual characteristics or habits. Furthermore, large-scale network platforms must establish compliance systems, publicly self-report their efforts, and outsource data-protective measures.

And if data flows across borders, the data collectors must establish a specialized agency in China or appoint a representative to be responsible. Organizations are required to offer clarity on how data is protected and its security assessed.

Storing data overseas does not exempt a person or company from compliance with the Personal Information Protection Law.

In the end, supervision and law enforcement falls to the Cyberspace Administration and relevant departments of the State Council.

[…]

Source: China puts continuous consent at the center of data protection law • The Register

It looks like China has had a good look at the EU Cybersecurity Act and built on it. All this looks very good, and of course it’s even better that Chinese governmental agencies are also mandated to follow it, but is it true? With all the governmental AI systems, cameras and facial recognition systems tracking ethnic minorities (such as the Uyghurs) and assigning good-behaviour scores, how will these be affected? Somehow I doubt they will dismantle the pervasive surveillance apparatus they have. So even if the laws sound excellent, the proof is in the pudding.

Sensitive Data On Afghan Allies Collected By The US Military Is Now In The Hands Of The Taliban

The problem with harvesting reams of sensitive data is that it presents a very tempting target for malicious hackers, enemy governments, and other wrongdoers. That hasn’t prevented anyone from collecting and storing all of this data, secure only in the knowledge this security will ultimately be breached.

[…]

The Taliban is getting everything we left behind. It’s not just guns, gear, and aircraft. It’s the massive biometric collections we amassed while serving as armed ambassadors of goodwill. The databases the US government compiled to track its allies are now handy repositories that will allow the Taliban to hunt down its enemies. Ken Klippenstein and Sara Sirota have more details for The Intercept.

The devices, known as HIIDE, for Handheld Interagency Identity Detection Equipment, were seized last week during the Taliban’s offensive, according to a Joint Special Operations Command official and three former U.S. military personnel, all of whom worried that sensitive data they contain could be used by the Taliban. HIIDE devices contain identifying biometric data such as iris scans and fingerprints, as well as biographical information, and are used to access large centralized databases. It’s unclear how much of the U.S. military’s biometric database on the Afghan population has been compromised.

At first, it might seem that this will only allow the Taliban to high-five each other for making the US government’s shit list. But it wasn’t just used to track terrorists. It was used to track allies.

While billed by the U.S. military as a means of tracking terrorists and other insurgents, biometric data on Afghans who assisted the U.S. was also widely collected and used in identification cards, sources said.

[…]

Source: Sensitive Data On Afghan Allies Collected By The US Military Is Now In The Hands Of The Taliban | Techdirt

Distributed Denial of Secrets – the new wikileaks

Distributed Denial of Secrets is a journalist 501(c)(3) non-profit devoted to enabling the free transmission of data in the public interest.

We aim to avoid political, corporate or personal leanings, to act as a beacon of available information. As a transparency collective, we don’t support any cause, idea or message beyond ensuring that information is available to those who need it most—the people.

You can read more about our collective, and our decision to embrace all sources of information. At its core, however, our mission is simple:

Veritatem cognoscere ruat cælum et pereat mundus

Source: Distributed Denial of Secrets

Apple’s Not Digging Itself Out of This One: scanning your pictures is dangerous and flawed

Online researchers say they have found flaws in Apple’s new child abuse detection tool that could allow bad actors to target iOS users. However, Apple has denied these claims, arguing that it has intentionally built safeguards against such exploitation.

It’s just the latest bump in the road for the rollout of the company’s new features, which have been roundly criticized by privacy and civil liberties advocates since they were initially announced two weeks ago. Many critics view the updates—which are built to scour iPhones and other iOS products for signs of child sexual abuse material (CSAM)—as a slippery slope towards broader surveillance.

The most recent criticism centers around allegations that Apple’s “NeuralHash” technology—which scans for the bad images—can be exploited and tricked to potentially target users. This started because online researchers dug up and subsequently shared code for NeuralHash as a way to better understand it. One GitHub user, AsuharietYgvar, claims to have reverse-engineered the scanning tech’s algorithm and published the code to his page. Ygvar wrote in a Reddit post that the algorithm was basically available in iOS 14.3 as obfuscated code and that he had taken the code and rebuilt it in a Python script to assemble a clearer picture of how it worked.

Problematically, within a couple of hours, another researcher said they were able to use the posted code to trick the system into misidentifying an image, creating what is called a “hash collision.”

[…]

However, “hash collisions” involve a situation in which two totally different images produce the same “hash” or signature. In the context of Apple’s new tools, this has the potential to create a false-positive, potentially implicating an innocent person for having child porn, critics claim. The false-positive could be accidental or intentionally triggered by a malicious actor.
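NeuralHash itself is proprietary, but the collision concept is easy to demonstrate with a much simpler perceptual hash. The sketch below implements a classic 64-bit “average hash”: the image is shrunk to an 8×8 grayscale grid and thresholded at its mean brightness. Because so much detail is thrown away, very different images can end up with identical or near-identical fingerprints, which is exactly the property an attacker exploits. The filenames are placeholders.

```python
# Toy perceptual hash ("average hash") to illustrate hash collisions.
# This is NOT NeuralHash, just a far simpler scheme with the same failure mode.
from PIL import Image

def average_hash(path, size=8):
    """64-bit fingerprint: shrink, grayscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return int(bits, 2)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

h1 = average_hash("photo_a.jpg")  # placeholder filenames
h2 = average_hash("photo_b.jpg")
print(f"distance: {hamming(h1, h2)} bits")  # 0 = a collision
```

A matching system flags images whose hashes are identical or within a small Hamming distance, so anyone who can craft an unrelated image that lands inside that distance has manufactured a false positive.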

[…]

Most alarmingly, researchers noted that it could be easily co-opted by a government or other powerful entity, which might repurpose its surveillance tech to look for other kinds of content. “Our system could easily be repurposed for surveillance and censorship,” write Mayer and his research partner, Anunay Kulshrestha, in an op-ed in the Washington Post. “The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.”

The researchers were “so disturbed” by their findings that they subsequently declared the system dangerous, and warned that it shouldn’t be adopted by a company or organization until more research could be done to curtail the potential dangers it presented. However, not long afterward, Apple announced its plans to roll out a nearly identical system to over 1.5 billion devices in an effort to scan for CSAM. The op-ed ultimately notes that Apple is “gambling with security, privacy and free speech worldwide” by implementing a similar system in such a hasty, slapdash way.

[…]

Apple’s decision to launch such an invasive technology so swiftly and unthinkingly is a major liability for consumers. The fact that Apple says it has built safety nets around this feature is not comforting at all, he added.

“You can always build safety nets underneath a broken system,” said Green, noting that it doesn’t ultimately fix the problem. “I have a lot of issues with this [new system]. I don’t think it’s something that we should be jumping into—this idea that local files on your device will be scanned.” Green further affirmed the idea that Apple had rushed this experimental system into production, comparing it to an untested airplane whose engines are held together via duct tape. “It’s like Apple has decided we’re all going to go on this airplane and we’re going to fly. Don’t worry [they say], the airplane has parachutes,” he said.

[…]

Source: Apple’s Not Digging Itself Out of This One

Your Credit Score Should Be Based On Your Web History, IMF Says

In a new blog post for the International Monetary Fund, four researchers presented their findings from a working paper that examines the current relationship between finance and tech as well as its potential future. Gazing into their crystal ball, the researchers see the possibility of using the data from your browsing, search, and purchase history to create a more accurate mechanism for determining the credit rating of an individual or business. They believe that this approach could result in greater lending to borrowers who would potentially be denied by traditional financial institutions. At its heart, the paper is trying to wrestle with the dawning notion that the institutional banking system is facing a serious threat from tech companies like Google, Facebook, and Apple. The researchers identify two key areas in which this is true: Tech companies have greater access to soft-information, and messaging platforms can take the place of the physical locations that banks rely on for meeting with customers.

The concept of using your web history to inform credit ratings is framed around the notion that lenders rely on hard data that might obscure the worthiness of a borrower or paint an unnecessarily dire picture during hard times. Citing soft data points like “the type of browser and hardware used to access the internet, the history of online searches and purchases” that could be incorporated into evaluating a borrower, the researchers believe that when a lender has a more intimate relationship with the potential client’s history, they might be more willing to cut them some slack. […] But how would all this data be incorporated into credit ratings? Machine learning, of course. It’s black boxes all the way down. The researchers acknowledge that there will be privacy and policy concerns related to incorporating this kind of soft data into credit analysis. And they do little to explain how this might work in practice.
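Since the working paper stops short of specifics, here is a deliberately toy sketch of what “machine learning on soft data” could look like: a classifier fitted on browsing-derived features to predict default. The features, data, and labels are entirely fabricated for illustration; the IMF researchers name no model, and real systems would be far more opaque, which is rather the point.

```python
# Toy credit-risk classifier on invented "soft data" features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: [uses_desktop_browser, late_night_browsing_share,
#           price_comparison_searches_per_week]  (all invented)
X = np.array([
    [1, 0.10, 5.0],
    [0, 0.60, 0.5],
    [1, 0.20, 3.0],
    [0, 0.55, 1.0],
])
y = np.array([0, 1, 0, 1])  # 1 = defaulted (fabricated labels)

model = LogisticRegression().fit(X, y)
applicant = np.array([[1, 0.15, 4.0]])
print("estimated default probability:", model.predict_proba(applicant)[0, 1])
```

Even in this four-row toy, the lending decision now hinges on which browser you use, and explaining why the model scored an applicant the way it did is already non-trivial: the black-box concern in miniature.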

Source: Your Credit Score Should Be Based On Your Web History, IMF Says – Slashdot

So now the banks want your browsing history. They don’t want to miss out on the surveillance economy.

How to Stop Apple From Scanning Your iPhone Photos Before iOS 15 Arrives – disable photo backups. No alternative offered, sorry.

Photos that are sent in messaging apps like WhatsApp or Telegram aren’t scanned by Apple. Still, if you don’t want Apple to do this scanning at all, your only option is to disable iCloud Photos. To do that, open the “Settings” app on your iPhone or iPad, go to the “Photos” section, and disable the “iCloud Photos” feature. From the popup, choose the “Download Photos & Videos” option to download the photos from your iCloud Photos library.

You can also use the iCloud website to download all photos to your computer. Your iPhone will now stop uploading new photos to iCloud, and Apple won’t scan any of your photos now.

Looking for an alternative? There really isn’t one. All major cloud-backup providers have the same scanning feature, it’s just that they do it completely in the cloud (while Apple uses a mix of on-device and cloud scanning). If you don’t want this kind of photo scanning, use local backups, NAS, or a backup service that is completely end-to-end encrypted.

Source: How to Stop Apple From Scanning Your iPhone Photos Before iOS 15 Arrives

Zoom to pay $85M for lying about encryption and sending data to Facebook and Google

Zoom has agreed to pay $85 million to settle claims that it lied about offering end-to-end encryption and gave user data to Facebook and Google without the consent of users. The settlement between Zoom and the filers of a class-action lawsuit also covers security problems that led to rampant “Zoombombings.”

The proposed settlement would generally give Zoom users $15 or $25 each and was filed Saturday at US District Court for the Northern District of California. It came nine months after Zoom agreed to security improvements and a “prohibition on privacy and security misrepresentations” in a settlement with the Federal Trade Commission, but the FTC settlement didn’t include compensation for users.

As we wrote in November, the FTC said that Zoom claimed it offers end-to-end encryption in its June 2016 and July 2017 HIPAA compliance guides, in a January 2019 white paper, in an April 2017 blog post, and in direct responses to inquiries from customers and potential customers. In reality, “Zoom did not provide end-to-end encryption for any Zoom Meeting that was conducted outside of Zoom’s ‘Connecter’ product (which are hosted on a customer’s own servers), because Zoom’s servers—including some located in China—maintain the cryptographic keys that would allow Zoom to access the content of its customers’ Zoom Meetings,” the FTC said. In real end-to-end encryption, only the users themselves have access to the keys needed to decrypt content.
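To make the distinction concrete, here is a minimal sketch of genuine end-to-end encryption using PyNaCl. The library choice is ours for illustration (it is not what Zoom uses); the point is structural: private keys never leave the endpoints, so a relay server holding only ciphertext cannot read the content, which is precisely the property Zoom’s key-holding servers violated.

```python
# Minimal E2EE sketch with PyNaCl: only the endpoints hold private keys.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()  # stays on Alice's device
bob_key = PrivateKey.generate()    # stays on Bob's device

# Each side needs only the *public* half of the other's keypair.
alice_box = Box(alice_key, bob_key.public_key)
ciphertext = alice_box.encrypt(b"meeting audio frame")

# A relay server sees only this ciphertext. Bob decrypts locally.
bob_box = Box(bob_key, alice_key.public_key)
assert bob_box.decrypt(ciphertext) == b"meeting audio frame"
```

If the server generates or stores either private key, as Zoom’s infrastructure effectively did, the “end-to-end” label stops meaning anything.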

[…]

Source: Zoom to pay $85M for lying about encryption and sending data to Facebook and Google | Ars Technica

Game Dev Turns Down $500k Exploitative Contract, explains why – looks like music industry contracts

Receiving a publishing deal from an indie publisher can be a turning point for an independent developer. But when one-man team Jakefriend was approached with an offer to invest half a million Canadian dollars into his hand-drawn action-adventure game Scrabdackle, he discovered the contract’s terms could see him signing himself into a lifetime of debt, losing all rights to his game, and even paying for it to be completed by others out of his own money.

In a lengthy thread on Twitter, indie developer Jakefriend explained the reasons he had turned down the half-million publishing deal for his Kickstarter-funded project, Scrabdackle. Already having raised CA$44,552 from crowdfunding, the investment could have seen his game released in multiple languages, with full QA testing, and launched simultaneously on PC and Switch. He just had to sign a contract including clauses that could leave him financially responsible for the game’s completion, while receiving no revenue at all, should he breach its terms.

“I turned down a pretty big publishing contract today for about half a million in total investment,” begins Jake’s thread. Without identifying the publisher, he continues, “They genuinely wanted to work with me, but couldn’t see what was exploitative about the terms. I’m not under an NDA, wanna talk about it?”

Over the following 24 tweets, the developer lays out the key issues with the contract, most especially focusing on the proposed revenue share. While the unnamed publisher would eventually offer a 50:50 split of revenues (albeit minus up to 10% for other sundry costs, including—very weirdly—international sales taxes), this wouldn’t happen until 50% of the marketing spend (approximately CA$200,000/US$159,000) and the entirety of his development funds (CA$65,000 Jake confirms to me via Discord) was recouped by sales. That works out to about 24,000 copies of the game, before which its developer would receive precisely 0% of revenue.
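As a back-of-envelope check on that 24,000 figure (the price and store cut below are our assumptions; the recoupment terms are from Jake’s thread):

```python
# Rough recoupment math for the contract Jake describes.
marketing_recoup = 0.5 * 200_000   # 50% of the ~CA$200k marketing spend
dev_funds_recoup = 65_000          # the full development advance
to_recoup = marketing_recoup + dev_funds_recoup   # CA$165,000

list_price = 10.00   # assumed CA$ price per copy
store_cut = 0.30     # assumed 30% platform fee
net_per_copy = list_price * (1 - store_cut)       # CA$7.00 per copy

copies = to_recoup / net_per_copy
print(f"break-even before the dev sees a cent: {copies:,.0f} copies")
# ~23,600 copies, consistent with the thread's "about 24,000"
```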

Even then, Scrabdackle’s lone developer explains, the contract made clear there would be no payments until a further 30 days after the end of the next quarter, with a further clause that allowed yet another three month delay beyond that. All this with no legal requirement to show him their financial records.

Should Jake want to challenge the sales data for the game, he’d be required to call for an audit, which he’d have to pay for whether there were issues or not. And should it turn out that there were discrepancies, there’d be no financial penalty for the publisher, merely the requirement to pay the missing amount—which he would have to hope would be enough to cover paying for the audit in the first place.

Another section of the contract explained that should there be disagreement about the direction of the game, the publisher could overrule and bring in a third-party developer to make the changes Jake would not, at Jake’s personal expense. With no spending limit on that figure.

But perhaps most surprising was a section declaring that should the developer be found in breach of the contract—something Jake explains is too ambiguously defined—then they would lose all rights to their game, receive no revenue from its sales, have to repay all the money they received, and pay for all further development costs to see the game completed. And here again there was no upper limit on what those costs could be.

It might seem obvious that no one should ever sign a contract containing clauses just so ridiculous. To be liable—at the publisher’s whim—for unlimited costs to complete a game while also required to pay back all funds (likely already spent), for no income from the game’s sales… Who would ever agree to such a thing? Well, as Jake tells me via Discord, an awful lot of independent developers, desperate for some financial support to finish their project. The contract described in his tweets might sound egregious, but the reality is that most of them offer some kind of awful term(s) for indie game devs.

“My close indie dev friends discuss what we’re able to of contracts frequently,” he says, “and the only thing surprising to them about mine is that it hit all the typical red flags instead of typically most of them. We’re all extremely fatigued and disheartened by how mundane an unjust contract offer is. It’s unfair and it’s tiring.”

Jake makes it clear that he doesn’t believe the people who contacted him were being maliciously predatory, but rather they were simply too used to the shitty terms. “I felt genuinely no sense of wanting to give me a bad deal with the scouts and producers I was speaking to, but I have to assume they are aware of the problems and are just used to that being the norm as well.”

Since posting the thread, Jake tells me he’s heard from a lot of other developers who described the terms to which he objected as, “sadly all-too-familiar.” At one point creator of The Witness, Jonathan Blow, replied to the thread saying, “I can guess who the publisher is because I have seen equivalent contracts.” Except Jake’s fairly certain he’d be wrong.

“The problem is so widespread,” Jake explains, “that when you describe the worst of terms, everyone thinks they know who it is and everyone has a different guess.”

While putting this piece together, I reached out to boutique indie publisher Mike Rose of No More Robots, to see if he had seen anything similar, and indeed who he thought the publisher might be. “Honestly, it could be anyone,” he replied via Discord. “What [Jake] described is very much the norm. All of the big publishers you like, his description is all of their contracts.”

This is very much a point that Jake wants to make clear. In fact, it’s why he didn’t identify the publisher in his thread. Rather than to spare their blushes, or harm his future opportunities, Jake explains that he did it to ensure his experience couldn’t be taken advantage of by other indie publishers. “I don’t want to let others with equally bad practices off the hook,” he tells me. “As soon as I say ‘It was SoAndSo Publishing’, everyone else can say, ‘Wow, can’t believe it, glad we’re not like that,’ and have deniability.”

I also reached out to a few of the larger indie publishers, listing the main points of contention in Jake’s thread, to see if they had any comments. The only company that replied by the time of publication was Devolver. I was told,

“Publishing contracts have dozens of variables involved and a developer should rightfully decline points and clauses that make them feel uncomfortable or taken advantage of in what should be an equitable relationship with their partner—publisher, investor, or otherwise. Rev share and recoupment in particular should be weighed on factors like investment, risk, and opportunity for both parties and ultimately land on something where everyone feels like they are receiving a fair shake on what was put forth on the project. While I have not seen the full contract and context, most of the bullet points you placed here aren’t standard practice for our team.”

Where does this leave Jake and the future of Scrabdackle? “The Kickstarter funds only barely pay my costs for the next 10 months,” he tells Kotaku. “So there’s no Switch port or marketing budget to speak of. Nonetheless, I feel more motivated than ever going it alone.”

I asked if he would still consider a more reasonable publishing deal at this point. “This was a hobby project that only became something more when popular demand from an incredible and large community rallied for me to build a crowdfunding campaign…A publisher can offer a lot to an indie project, and a good deal is the difference between gamedev being a year-long stint or a long-term career for me, but that’s not worth the pound of flesh I was asked for.”

Source: Game Dev Turns Down Half Million Dollar Exploitative Contract

For the music industry:

Source: Courtney Love does the math

Source: How much do musicians really make from Spotify, iTunes and YouTube?

Source: How Musicians Make Money — Or Don’t at All — in 2018

Source: Kanye’s Contracts Reveal Dark Truths About the Music Industry

Source: Smiles and tears when “slave contract” controls the lives of K-Pop artists.

Source: Youtube’s support for musicians comes with a catch

Stop using Zoom, Hamburg’s DPA warns state government – The US does not safeguard EU citizen data

Hamburg’s state government has been formally warned against using Zoom over data protection concerns.

The German state’s data protection agency (DPA) took the step of issuing a public warning yesterday, writing in a press release that the Senate Chancellery’s use of the popular videoconferencing tool violates the European Union’s General Data Protection Regulation (GDPR) since user data is transferred to the U.S. for processing.

The DPA’s concern follows a landmark ruling (Schrems II) by Europe’s top court last summer which invalidated a flagship data transfer arrangement between the EU and the U.S. (Privacy Shield), finding U.S. surveillance law to be incompatible with EU privacy rights.

The fallout from Schrems II has been slow to manifest — beyond an instant blanket of legal uncertainty. However, a number of European DPAs are now investigating the use of U.S.-based digital services because of the data transfer issue, in some instances publicly warning against the use of mainstream U.S. tools like Facebook and Zoom because user data cannot be adequately safeguarded when it’s taken over the pond.

German agencies are among the most proactive in this respect. But the EU’s data protection supervisor is also investigating the bloc’s use of cloud services from U.S. giants Amazon and Microsoft over the same data transfer concern.

[…]

The agency asserts that use of Zoom by the public body does not comply with the GDPR’s requirement for a valid legal basis for processing personal data, writing: “The documents submitted by the Senate Chancellery on the use of Zoom show that [GDPR] standards are not being adhered to.”

The DPA initiated a formal procedure earlier, via a hearing on June 17, 2021, but says the Senate Chancellery failed to stop using the videoconferencing tool. Nor did it provide any additional documents or arguments to demonstrate compliant usage. Hence the DPA took the step of issuing a formal warning, under Article 58(2)(a) of the GDPR.

[…]

Source: Stop using Zoom, Hamburg’s DPA warns state government | TechCrunch

How to Limit Spotify From Tracking You, Because It Knows Too Much – and sells it

Most Spotify users are likely aware the streaming service tracks their listening activity, search history, playlists, and the songs they like or skip—that’s all part of helping the algorithm figure out what you like, right? However, some users may be less OK with how much other data Spotify and its partners are logging.

According to Spotify’s privacy policy, the company tracks:

  • Your name
  • Email address
  • Phone number
  • Date of birth
  • Gender
  • Street address, country, and other GPS location data
  • Login info
  • Billing info
  • Website cookies
  • IP address
  • Facebook user ID, login information, likes, and other data.
  • Device information like accelerometer or gyroscope data, operating system, model, browser, and even some data from other devices on your wifi network.

This information helps Spotify tailor song and artist recommendations to your tastes and is used to improve the in-app user experience, sure. However, the company also uses it to attract advertising partners, who can create personalized ads based on your information. And that doesn’t even touch on the third-party cross-site trackers that are eagerly eyeing your Spotify activity too.

Treating people and their data like a consumable resource is scummy, but it’s standard practice for most companies and websites these days, and the general public’s response is typically a shrug (never mind that a survey of US adults revealed we place a high value on our personal data). However, it’s still a security risk. As we’ve seen repeatedly over the years, all it takes is one poorly secured server or an unusually skilled hacker to compromise the personal data that companies like Spotify hold onto.

And to top things off, almost all of your Spotify profile’s information is public by default—so anyone else with a Spotify account can easily look you up unless you go out of your way to change your settings.
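
To make that concrete, here is a minimal sketch (Python, with a placeholder token and user ID) of how anyone holding ordinary developer credentials can enumerate a user’s public playlists through Spotify’s Web API, with no consent from the person being looked up:

```python
# Sketch: listing another user's public Spotify playlists via the Web API.
# ACCESS_TOKEN and USER_ID are placeholders; any valid OAuth token works,
# including one obtained via the client-credentials flow (no user consent).
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder
USER_ID = "target_user_id"          # placeholder: the profile to look up

resp = requests.get(
    f"https://api.spotify.com/v1/users/{USER_ID}/playlists",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"limit": 50},
)
resp.raise_for_status()

for playlist in resp.json()["items"]:
    owner = playlist["owner"]["display_name"]
    print(f'{playlist["name"]} ({playlist["tracks"]["total"]} tracks, by {owner})')
```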

Luckily, you can limit some of the data Spotify and connected third-party apps collect, and can review the personal information the app has stored. Spotify doesn’t offer that many data privacy options, and many of them are spread out across its web, desktop, and mobile apps, but we’ll show you where to find them all and which ones you should enable for the most private Spotify listening experience possible. You know, relatively.

How to change your Spotify account’s privacy settings

The web player is where to start if you want to tune up your Spotify privacy. Almost all of Spotify’s data privacy settings are found there, rather than in the mobile or desktop apps.

We’ll start by cutting down on how much personal data you share with Spotify.

  1. Log in to Spotify’s web player on desktop.
  2. Click your user icon then go to Account > Edit profile.
  3. Remove or edit any personal info that you’re able to.
  4. Uncheck “Share my registration data with Spotify’s content providers for marketing purposes.”
  5. Click “Save Changes.”

Next, let’s limit how Spotify uses your personal data for advertising.

  1. Go to Account > Privacy settings.
  2. Turn off “Process my personal data for tailored ads.” Note that you’ll still get just as many ads—and Spotify will still track you—but your personal data will no longer be used to deliver you targeted ads.
  3. Turn off “Process my Facebook data.” This will stop Spotify from using your Facebook account data to further refine the ads you hear.

Lastly, go to Account > Apps to review all the external apps linked to your Spotify account and see a list of all devices you’re logged in to. Remove any you don’t need or use anymore.

How to review your Spotify account data

You can also see how much of your personal data Spotify has collected. At the bottom of the Privacy Settings page, there’s an option to download your Spotify data for review. While you can’t remove this data from your account, it shows you a selection of personal information, your listening and search history, and other data the company has collected. Click “Request” to begin the process. Note that it can take up to 30 days for Spotify to get your data ready for download.
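
Once the archive arrives, it’s plain JSON you can poke at yourself. Here’s a small sketch that tallies listening time per artist, assuming the export contains a StreamingHistory0.json file in the commonly reported format (endTime, artistName, trackName, msPlayed); Spotify may change that layout at any time:

```python
# Sketch: summarising listening history from Spotify's data export.
# Assumes the commonly reported StreamingHistory0.json format; the
# actual layout of the export may differ or change over time.
import json
from collections import Counter

with open("MyData/StreamingHistory0.json", encoding="utf-8") as f:
    plays = json.load(f)

time_per_artist = Counter()
for play in plays:
    time_per_artist[play["artistName"]] += play["msPlayed"]

for artist, ms in time_per_artist.most_common(10):
    print(f"{artist}: {ms / 3_600_000:.1f} hours")  # ms to hours
```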

How to hide public playlists and listening activity on Spotify

Your Spotify playlists and listening activity are public by default, but you can quickly hide them, or even block certain listening activity, in Spotify’s web and desktop apps. While this doesn’t affect Spotify’s data tracking, it’s still a good idea to keep some info hidden if you’re trying to make Spotify as private as possible.

How to turn off Spotify listening activity

Desktop

  1. Click your profile image and go to Settings > Social.
  2. Turn off “Make my new playlists public.”
  3. Turn off “Share my listening activity on Spotify.”

Mobile

  1. Tap the settings icon in the upper-right of the app.
  2. Scroll down to “Social.”
  3. Disable “Listening Activity.”

How to hide Spotify Playlists

Don’t forget to hide previously created playlists, which are made public by default. This can be done from the desktop, web, and mobile apps.

Mobile

  1. Open the “Your Library” tab.
  2. Select a playlist.
  3. Tap the three-dot icon in the upper-right of the screen.
  4. Select “Make Secret.”

Desktop app and web player

  1. Open a playlist from the library bar on the left.
  2. Click the three-dot icon by the Playlist’s name.
  3. Select “Make Secret.”

How to use Private Listening mode on Spotify

Spotify’s Private Listening mode also hides your listening activity, but you need to enable it manually each time you want to use it.

Mobile

  1. In the app, go to Settings > Social.
  2. Tap “Enable private session.”

Desktop app and web player

There are three ways to enable a Private session on desktop:

  • Click your profile picture then select “Private session.”
  • Or, click the “…” icon in the upper-left and go to File > Private session.
  • Or, go to Settings > Social and toggle “Start a private session to listen anonymously.”

Note that Private sessions only affect what other users see (or don’t see, rather). They don’t stop Spotify from tracking your activity—though as Wired points out, Spotify’s Privacy Policy vaguely implies Private Mode “may not influence” your recommendations, so it’s possible some data isn’t tracked while this mode is turned on. It’s better to use the privacy controls outlined in the sections above if you want to change how Spotify collects data.

How to limit third-party cookie tracking in Spotify

Turning on the privacy settings above will help reduce how much data Spotify tracks and uses for advertising and keep some of your Spotify listening history hidden from other users, but you should also take steps to limit how other apps and websites track your Spotify activity.


The desktop app has built-in cookie blocking controls that can do this:

  1. In the desktop app, click your username in the top right corner.
  2. Go to Settings > Show advanced settings.
  3. Scroll down to “Privacy” and turn on “Block all cookies for this installation of the Spotify desktop app.”
  4. Close and restart the app for the change to take effect.

iOS and iPadOS users can disable app tracking in their device’s settings. Android users have a similar option, though it’s not as aggressive. And for those listening on the Spotify web player, use a browser with strict privacy controls, like Safari, Firefox, or Brave.

The last resort: Delete your Spotify account

Even with all possible privacy settings turned on and Private Listening sessions enabled at all times, Spotify is still tracking your data. If that is absolutely unacceptable to you, the only real option is to delete your account. This will remove all your Spotify data for good—just make sure you download and back up any data you want to import to other services before you go through with it.

  1. Go to the Contact Spotify Support web page and sign in with your Spotify account.
  2. Select the “Account” section.
  3. Click “I want to close my account” from the list of options.
  4. Scroll down to the bottom of the page and click “Close Account.”
  5. Follow the on-screen prompts, clicking “Continue” each time to move forward.
  6. After the final confirmation, Spotify will send you an email with the cancellation link. Click the “Close My Account” button to verify you want to delete your account (this link is only active for 24 hours).

To be clear, we’re not advocating everyone go out and delete their Spotify accounts over the company’s privacy policy and advertising practices, but it’s always important to know how—and why—the apps and websites we use are tracking us. As we said at the top, even companies with the best intentions can fumble your data, unwittingly delivering it into the wrong hands.

Even if you’re cool with Spotify tracking you and don’t feel like enabling the options we’ve outlined in this guide, take a moment to tune up your account’s privacy with a strong password and two-factor sign-in, and remove any unnecessary info from your profile. These extra steps will help keep you safe if there’s ever an unexpected security breach.

Source: How to Limit Spotify From Tracking You, Because It Knows Too Much

Apple’s iPhone computer vision has the potential to preserve privacy but also break it completely

[…]

an AI on your phone will scan all those [pictures] you have sent and will send to iCloud Photos. It will generate fingerprints that purportedly identify pictures, even ones that have been highly modified, to be checked against fingerprints of known CSAM material. Too many of these – there’s a threshold – and Apple’s systems will let Apple staff investigate. They won’t get the pictures, but rather a voucher containing a version of the picture. But that’s not the picture, OK? If it all looks too dodgy, Apple will inform the authorities.
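
Stripped of the snark, that’s perceptual fingerprinting plus a match threshold. The toy sketch below shows the general shape of such a scheme, using a generic 64-bit average hash and Hamming-distance matching; it is emphatically not Apple’s NeuralHash, and the hash list and threshold values are invented:

```python
# Toy fingerprint-and-threshold matcher: a generic 64-bit average hash,
# NOT Apple's NeuralHash. Hash list and threshold values are invented.
from PIL import Image

def average_hash(path: str) -> int:
    """64-bit perceptual fingerprint: 8x8 grayscale, one bit per pixel."""
    pixels = list(Image.open(path).convert("L").resize((8, 8)).getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def matches(fp: int, known: int, max_flipped_bits: int = 5) -> bool:
    """Tolerates 'highly modified' images by allowing a few differing bits."""
    return bin(fp ^ known).count("1") <= max_flipped_bits

KNOWN_FINGERPRINTS = {0x0123456789ABCDEF}  # stand-in for the known-CSAM list
REPORT_THRESHOLD = 30                      # "there's a threshold": value unknown

def over_threshold(photo_paths: list[str]) -> bool:
    hits = sum(
        any(matches(average_hash(p), k) for k in KNOWN_FINGERPRINTS)
        for p in photo_paths
    )
    return hits >= REPORT_THRESHOLD
```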

[…]

In a blog post “Recognizing People in Photos Through Private On-Device Machine Learning” last month, Apple plumped itself up and strutted its funky stuff on how good its new person recognition process is. Obscured, oddly lit, accessorised, madly angled and other bizarrely presented faces are no problemo, squire.

By dint of extreme cleverness and lots of on-chip AI, Apple says it can efficiently recognise everyone in a gallery of photos. It even has a Hawking-grade equation, just to show how serious it is, as proof that “finally, we rescale the obtained features by s and use it as logit to compute the softmax cross-entropy loss based on the equation below.” Go, look. It’s awfully science-y.
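
For the curious, the step the quote describes is a standard trick in recognition losses: take similarity features, rescale them by a factor s, and feed the result to softmax cross-entropy as logits. A minimal numpy sketch (the value of s and the example features are arbitrary, not Apple’s):

```python
# The quoted step, in miniature: rescale features by s, then use them as
# logits for softmax cross-entropy. s and the example features are arbitrary.
import numpy as np

def scaled_softmax_cross_entropy(features: np.ndarray, label: int,
                                 s: float = 30.0) -> float:
    logits = s * features              # "rescale the obtained features by s"
    logits = logits - logits.max()     # shift for numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return float(-log_probs[label])    # cross-entropy for the true class

# e.g. cosine similarities of one face embedding to four identity prototypes:
print(scaled_softmax_cross_entropy(np.array([0.9, 0.1, -0.2, 0.3]), label=0))
```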

The post is 3,500 words long and complex, a very detailed paper on computer vision, which is one of the two tags Apple has given it. The other tag, Privacy, can be entirely summarised in six words: it’s on-device, therefore it’s private. No equation.

That would be more comforting if Apple hadn’t said, days later, that on-device analysis is going to be a key component in informing law enforcement agencies about things they disapprove of. Put the two together, and there’s a whole new and much darker angle to the fact, sold as a major consumer benefit, that Apple has been cramming in as much AI as it can so it can look at pictures as you take them and after you’ve stored them.

We’ve all been worried about how mobile phones are stuffed with sensors that can watch what we watch, hear what we hear, track where we go and note what we do. The evolving world of personal data privacy is based around these not being stored in the vast vaults of big data, keeping them from being grist to the mill of manipulating our digital personas.

But what happens if the phone itself grinds that corn? It may never share a single photograph without your permission, but what if it can look at that photograph and generate precise metadata about what, who, how, when, and where it depicts?
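
Here’s a sketch of what that distillation could look like: EXIF already carries the when and where, and any on-device model can supply the what and who. The classifier below is a deliberate stub standing in for a real local vision model:

```python
# Sketch: turning a photo into metadata entirely on-device. EXIF supplies
# when/where; describe_content() is a stub for a local vision model.
from PIL import Image

def describe_content(img: Image.Image) -> list[str]:
    return ["person", "outdoors"]  # stub: a real device runs an ML model here

def photo_metadata(path: str) -> dict:
    img = Image.open(path)
    exif = img.getexif()
    return {
        "when": exif.get(0x0132),             # DateTime tag
        "where": dict(exif.get_ifd(0x8825)),  # GPS IFD: lat/long, if present
        "what": describe_content(img),        # model-derived labels
    }
```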

This is an aspect of edge computing that is ahead of the regulators, even those of the EU who want to heavily control things like facial recognition. By the time any such regulation is produced, countless millions of devices will be using it to ostensibly provide safe, private, friendly on-device services that make taking and keeping photographs so much more convenient and fun.

It’s going to be very hard to turn that off, and very easy to argue for exemptions that weaken the regs to the point of pointlessness. Especially if the police and security services lobby hard as well, which they will as soon as they realise that this defeats end-to-end encryption without even touching end-to-end encryption.

So yes, Apple’s anti-CSAM model is capable of being used without impacting the privacy of the innocent, if it is run exactly as version 1.0 is described. It is also capable of working with the advances elsewhere in technology to break that privacy utterly, without setting off the tripwires of personal protection we’re putting in place right now.

[…]

Source: Apple’s iPhone computer vision has the potential to preserve privacy but also break it completely • The Register

Rockstar Begins A War On Modders For ‘GTA’ Games For Totally Unclear Reasons

[…]Rockstar Games has previously had its own run-in with its modding community, banning modders who attempted to shift GTA5’s online gameplay to dedicated servers that would allow mods to be used, since Rockstar’s servers don’t allow mods. What it’s now doing in issuing copyright notices on modders who have been forklifting older Rockstar assets into newer GTA games, however, is totally different.

Grand Theft Auto publisher Take-Two has issued copyright takedown notices for several mods on LibertyCity.net, according to a post from the site. The mods either inserted content from older Rockstar games into newer ones, or combined content from similar Rockstar games into one larger game. The mods included material from Grand Theft Auto 3, San Andreas, Vice City, Manhunt, and Bully.

This has been a legally active year for Take-Two, starting with takedown notices for reverse-engineered versions of GTA3 and Vice City. Those projects were later restored. Since then, Take-Two has issued takedowns for mods that move content from older Grand Theft Auto games into GTA5, as well as mods that combine older games from the GTA3 generation into one. That led to a group of modders preemptively taking down their 14-year-old mod for San Andreas in case they were next on Take-Two’s list.

All of this is partially notable because it’s new. Like many PC games, the GTA series has enjoyed a healthy modding community, and Rockstar has previously largely left that community alone. That’s generally smart, as mods like these are a fantastic way both to keep a game fresh as it ages and to lure in new players by enticing them with mods that meet their particular interests. I’ll never forget a Doom mod that replaced all of the original MIDI soundtrack files with MIDI versions of ’90s alternative grunge music. That mod caused me to play Doom all over again from start to finish.

But now Rockstar Games has flipped the script and is busily taking these fan mods down. Why? Well, no one is certain, but likely for the most obvious reason of all.

One reason a company might become more concerned with this kind of copyright infringement is that it’s planning to release a similar product and wants to be sure that its claim to the material can’t be challenged. It’s speculative at this point, but that tracks with the rumors we heard earlier this year that Take-Two is working on remakes of the PS2 Grand Theft Auto games.

In other words, Rockstar appears to be completely happy to reap all the benefits from the modding community right up until the moment it thinks it can make more money with re-releases, at which point the company cries “Copyright!” The company may well be within its rights to operate that way, but why in the world would the modding community ever work on Rockstar games again?

Source: Rockstar Begins A War On Modders For ‘GTA’ Games For Totally Unclear Reasons | Techdirt