The Linkielist

Linking ideas with the world

MEPs support drastically curbing police use of facial recognition and border biometric data trawling

Police should be banned from using blanket facial-recognition surveillance to identify people not suspected of crimes. Certain private databases of people’s faces for identification systems ought to be outlawed, too.

That’s the feeling of the majority of members in the European Parliament this week. In a vote on Wednesday, 377 MEPs backed a resolution restricting law enforcement’s use of facial recognition, 248 voted against, and 62 abstained.

“AI-based identification systems already misidentify minority ethnic groups, LGBTI people, seniors and women at higher rates, which is particularly concerning in the context of law enforcement and the judiciary,” reads a statement from the parliament.

“To ensure that fundamental rights are upheld when using these technologies, algorithms should be transparent, traceable and sufficiently documented, MEPs ask. Where possible, public authorities should use open-source software in order to be more transparent.”

As well as this, most of the representatives believe facial-recognition tech should not be used by the police in automatic mass surveillance of people in public, and monitoring should be restricted to only those thought to have broken the law. Datasets amassed by private companies, such as Clearview AI, for identifying citizens should also be prohibited along with systems that allow cops to predict crime from people’s behavior and backgrounds.

[…]

The vote is non-binding, meaning it cannot directly lead to any legislative change. Instead, it was cast to gauge whether members might be supportive of upcoming bills like the AI Act, a spokesperson for the EU parliament told The Register.

“The resolution is a non-exhaustive list of AI uses that MEPs within the home affairs field find problematic. They ask for a moratorium on deploying new facial recognition systems for law enforcement, and a ban on the narrower category of private facial recognition databases,” the spokesperson added.

It also called for border control systems to stop using biometric data to track travelers across the EU.

Source: MEPs support curbing police use of facial recognition • The Register

Search providers complaining that EU Google antitrust measures didn’t achieve anything

Four search providers – DuckDuckGo, Ecosia, Qwant, and Lilo – have penned an open letter to the European Commission claiming that Google is suppressing search engine competition.

The EU has made a number of efforts to counter Google’s search monopoly, including a July 2018 fine and ruling that the company engaged in “illegal tying of Google’s search and browser apps” and “illegal payments conditional on exclusive pre-installation of Google Search.”

Google responded with some licensing changes. In August 2019, it agreed with the EU to provide an Android Choice screen, which included selling spots on the new menu via auction – leading to participants like privacy-centric DuckDuckGo complaining that they were priced out.

Google’s new Android Choice screen

The Android Choice screen has since been revised by further agreement with the European Commission and now features more options and free participation for search providers: up to 12 search services, with the five most popular search engines in the user’s country, as recorded by StatCounter, listed first.
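As a rough illustration of the ordering rule described above (top five by StatCounter share first, remaining slots in random order), here is a toy sketch; the provider names and shares are invented, and this is in no way Google’s actual implementation:

```python
import random

def build_choice_screen(providers_by_share, max_slots=12, top_n=5):
    """Order the choice screen: the top_n most popular providers (by
    market share) come first, remaining slots are filled randomly."""
    ranked = sorted(providers_by_share, key=providers_by_share.get, reverse=True)
    top = ranked[:top_n]
    rest = ranked[top_n:]
    random.shuffle(rest)
    return (top + rest)[:max_slots]

# Hypothetical StatCounter-style shares for one country
shares = {"Google": 92.0, "Bing": 3.0, "Yahoo": 1.5, "DuckDuckGo": 0.6,
          "Ecosia": 0.3, "Qwant": 0.2, "Lilo": 0.1}
screen = build_choice_screen(shares)
```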

Third-party search providers are not happy. Today’s open letter [PDF] states that “despite recent changes, we do not believe it will move market share significantly.” The providers say that the new Android Choice menu is “only shown once, in a Google-designed, Google-owned onboarding process. If [users] later decide to switch defaults, they must labour through 15+ clicks or factory-reset their phone.” They also complain that Chrome desktop and other operating systems are not included, and worry that “it doesn’t apply to all search access points in Android.”

[…]

“In the meantime, at least one search company went bankrupt. A German company called Cliqz invested €100m into building their own search algorithm and they went bankrupt. Google playing on time is a big problem.”

Cliqz said in its farewell post last year: “We failed to convince the political stakeholders, that Europe desperately needs an own independent digital infrastructure. Here we can only hope that someone else picks up the ball… the world needs a private search engine that is not just using Bing or Google in the backend.”

In Russia, Ecosia CEO Christian Kroll said: “Yandex went down to a 20 per cent market share. Then they had a real choice screen on a fixed date and it went back to 60 per cent. I’m not saying we should do everything like Russia does, but it shows that it can have an effect.”

[…]

Source: Existence of Bing ‘essential’ to non-Google search engines • The Register

There’s a Murky Multibillion-Dollar Market for Your Phone’s Location Data

Companies that you likely have never heard of are hawking access to the location history on your mobile phone. An estimated $12 billion market, the location data industry has many players: collectors, aggregators, marketplaces, and location intelligence firms, all of which boast about the scale and precision of the data that they’ve amassed.

Location firm Near describes itself as “The World’s Largest Dataset of People’s Behavior in the Real-World,” with data representing “1.6B people across 44 countries.” Mobilewalla boasts “40+ Countries, 1.9B+ Devices, 50B Mobile Signals Daily, 5+ Years of Data.” X-Mode’s website claims its data covers “25%+ of the Adult U.S. population monthly.”

In an effort to shed light on this little-monitored industry, The Markup has identified 47 companies that harvest, sell, or trade in mobile phone location data. While hardly comprehensive, the list begins to paint a picture of the interconnected players that do everything from providing app developers with code to monetize user data, to offering analytics from “1.9 billion devices” and access to datasets on hundreds of millions of people. Six companies claimed more than a billion devices in their data, and at least four claimed their data was the “most accurate” in the industry.

The Location Data Industry: Collectors, Buyers, Sellers, and Aggregators

The Markup identified 47 players in the location data industry

1010Data
Acxiom
AdSquare
ADVAN
Airsage
Amass Insights
Alqami
Amazon AWS Data Exchange
Anomaly 6
Babel Street
Blis
Complementics
Cuebiq
Datarade
Foursquare
Gimbal
Gravy Analytics
GroundTruth
Huq Industries
InMarket / NinthDecimal
Irys
Kochava Collective
Lifesight
Mobilewalla (“40+ Countries, 1.9B+ Devices, 50B Mobile Signals Daily, 5+ Years of Data”)
Narrative
Near (“The World’s Largest Dataset of People’s Behavior in the Real-World”)
Onemata
Oracle
Phunware
PlaceIQ
Placer.ai
Predicio
Predik Data-Driven
Quadrant
QueXopa
Reveal Mobile
SafeGraph
Snowflake
start.io
Stirista
Tamoco
THASOS
Unacast
Venntel
Venpath
Veraset
X-Mode (Outlogic)
Created by Joel Eastwood and Gabe Hongsdusit. Source: The Markup. (See our data, including extended company responses, here.)

“There isn’t a lot of transparency and there is a really, really complex shadowy web of interactions between these companies that’s hard to untangle,” Justin Sherman, a cyber policy fellow at the Duke Tech Policy Lab, said. “They operate on the fact that the general public and people in Washington and other regulatory centers aren’t paying attention to what they’re doing.”

Occasionally, stories illuminate just how invasive this industry can be. In 2020, Motherboard reported that X-Mode, a company that collects location data through apps, was collecting data from Muslim prayer apps and selling it to military contractors. The Wall Street Journal also reported in 2020 that Venntel, a location data provider, was selling location data to federal agencies for immigration enforcement.

A Catholic news outlet also used location data from a data vendor to out a priest who had frequented gay bars, though it’s still unknown what company sold that information.

Many firms promise that privacy is at the center of their businesses and that they’re careful to never sell information that can be traced back to a person. But researchers studying anonymized location data have shown just how misleading that claim can be.
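One well-known result (de Montjoye et al., 2013) found that just four spatio-temporal points are enough to uniquely identify 95% of people in a mobility dataset. A toy simulation over entirely synthetic traces shows why stripping names is not enough:

```python
import random

random.seed(0)  # reproducible synthetic data

# "Anonymized" traces: each user is just a set of (hour, cell-tower) points
# with the name removed. All numbers here are invented.
N_USERS, N_POINTS, N_HOURS, N_CELLS = 10_000, 50, 24, 200
traces = [{(random.randrange(N_HOURS), random.randrange(N_CELLS))
           for _ in range(N_POINTS)} for _ in range(N_USERS)]

# An adversary who observes just four of user 0's points...
target = traces[0]
known = set(random.sample(sorted(target), 4))

# ...can check which "anonymous" traces contain all four of them.
matches = [uid for uid, t in enumerate(traces) if known <= t]
# Even with 10,000 users, the four points almost always single out user 0.
```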

[…]

Most of the time, the location data pipeline starts in your hands, when an app sends a notification asking for permission to access your location data.

Apps have all kinds of reasons for using your location. Map apps need to know where you are in order to give you directions to where you’re going. A weather, waves, or wind app checks your location to give you relevant meteorological information. A video streaming app checks where you are to ensure you’re in a country where it’s licensed to stream certain shows.

But unbeknownst to most users, some of those apps sell or share location data about their users with companies that analyze the data and sell their insights, like Advan Research. Other companies, like Adsquare, buy or obtain location data from apps for the purpose of aggregating it with other data sources.

[…]

Companies like Adsquare and Cuebiq told The Markup that they don’t publicly disclose which apps they get location data from, to keep a competitive advantage, but maintained that their process of obtaining location data was transparent and based on clear consent from app users.

[…]

Yiannis Tsiounis, the CEO of the location analytics firm Advan Research, said his company buys from location data aggregators, who collect the data from thousands of apps—but would not say which ones.

[…]

Into the Location Data Marketplace 

Once a person’s location data has been collected from an app and it has entered the location data marketplace, it can be sold over and over again, from the data providers to an aggregator that resells data from multiple sources. It could end up in the hands of a “location intelligence” firm that uses the raw data to analyze foot traffic for retail shopping areas and the demographics associated with its visitors. Or with a hedge fund that wants insights on how many people are going to a certain store.

“There are the data aggregators that collect the data from multiple applications and sell in bulk. And then there are analytics companies which buy data either from aggregators or from applications and perform the analytics,” said Tsiounis of Advan Research. “And everybody sells to everybody else.”

Some data marketplaces are part of well-known companies, like Amazon’s AWS Data Exchange, or Oracle’s Data Marketplace, which sell all types of data, not just location data.

[…]

Companies like Narrative say they are simply connecting data buyers and sellers by providing a platform. Narrative’s website, for instance, lists location data providers like SafeGraph and Complementics among its 17 providers, with more than two billion mobile advertising IDs to buy from.

[…]

To give a sense of how massive the industry is, Amass Insights has 320 location data providers listed in its directory, Jordan Hauer, the company’s CEO, said. While the company doesn’t directly collect or sell any of the data, hedge funds pay it to guide them through the myriad location data companies, he said.

[…]

Oh, the Places Your Data Will Go

There are a whole slew of potential buyers for location data: investors looking for intel on market trends or what their competitors are up to, political campaigns, stores keeping tabs on customers, and law enforcement agencies, among others.

Data from location intelligence firm Thasos Group has been used to measure the number of workers pulling extra shifts at Tesla plants. Political campaigns on both sides of the aisle have also used location data from people who were at rallies for targeted advertising.

Fast food restaurants and other businesses have been known to buy location data for advertising purposes down to a person’s steps. For example, in 2018, Burger King ran a promotion in which, if a customer’s phone was within 600 feet of a McDonald’s, the Burger King app would let the user buy a Whopper for one cent.
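The geofencing behind such a promotion reduces to a simple proximity test on GPS fixes. A hedged sketch (the function names and data are my own, not Burger King’s actual code):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_FT = 20_902_000  # mean Earth radius (~6,371 km) in feet

def distance_ft(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two GPS fixes, in feet."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_FT * asin(sqrt(a))

def offer_unlocked(user, competitor_sites, radius_ft=600):
    """True if the user's fix falls inside any competitor geofence."""
    return any(distance_ft(*user, *site) <= radius_ft
               for site in competitor_sites)
```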

The Wall Street Journal and Motherboard have also written extensively about how federal agencies including the Internal Revenue Service, Customs and Border Protection, and the U.S. military bought location data from companies tracking phones.

[…]

Outlogic (formerly known as X-Mode) offers a license for a location dataset titled “Cyber Security Location data” on Datarade for $240,000 per year. The listing says “Outlogic’s accurate and granular location data is collected directly from a mobile device’s GPS.”

At the moment, there are few if any rules limiting who can buy your data.

Sherman, of the Duke Tech Policy Lab, published a report in August finding that data brokers were advertising location information on people based on their political beliefs, as well as data on U.S. government employees and military personnel.

“There is virtually nothing in U.S. law preventing an American company from selling data on two million service members, let’s say, to some Russian company that’s just a front for the Russian government,” Sherman said.

Existing privacy laws in the U.S., like California’s Consumer Privacy Act, do not limit who can purchase data, though California residents can request that their data not be “sold”—which can be a tricky definition. Instead, the law focuses on allowing people to opt out of sharing their location in the first place.

[…]

“We know in practice that consumers don’t take action,” he said. “It’s incredibly taxing to opt out of hundreds of data brokers you’ve never even heard of.”

[…]


Source: There’s a Multibillion-Dollar Market for Your Phone’s Location Data – The Markup

Clearview AI Says It Can Do the ‘Computer Enhance’ Thing – wait, this evil has not yet been purged?

Sketchy face recognition company Clearview AI has inflated its stockpile of scraped images to over 10 billion, according to its co-founder and CEO Hoan Ton-That. What’s more, he says the company has new tricks up its sleeve, like using AI to draw in the details of blurry or partial images of faces.

Clearview AI has reportedly landed contracts with over 3,000 police and government customers including 11 federal agencies, which it says use the technology to identify suspects when it might otherwise be impossible. In April, a BuzzFeed report citing a confidential source identified over 1,800 public agencies that had tested or were currently using its products, including everything from police and district attorney’s offices to Immigration and Customs Enforcement and the U.S. Air Force. It also reportedly has worked with dozens of private companies including Walmart, Best Buy, Albertsons, Rite Aid, Macy’s, Kohl’s, AT&T, Verizon, T-Mobile, and the NBA.

Clearview has landed such deals despite facing considerable legal trouble over its unauthorized acquisition of those billions of photos, including state and federal lawsuits claiming violations of biometrics privacy laws, a consumer protection suit brought by the state of Vermont, the company’s forced exit from Canada, and complaints to privacy regulators in at least five other countries. There have also been reports detailing Ton-That’s historic ties to far-right extremists (which he denies) and pushback against the use of face recognition by police in general, which has led to bans on such use in over a dozen U.S. cities.

In an interview with Wired on Monday, Ton-That claimed that Clearview has now scraped over 10 billion images from the open web for use in its face recognition database. According to the CEO, the company is also rolling out a number of machine learning features, including one that uses AI to reconstruct faces that are obscured by masks.

Specifically, Ton-That told Wired that Clearview is working on “deblur” and “mask removal” tools. The first feature should be familiar to anyone who’s ever used an AI-powered image upscaling tool, taking a lower-quality image and using machine learning to add extra details. The mask removal feature uses statistical patterns found in other images to guess what a person might look like under a mask. In both cases, Clearview would essentially be offering informed guesswork. I mean, what could go wrong?
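A tiny arithmetic example shows why such reconstruction is guesswork: downscaling is many-to-one, so the discarded detail is simply gone. Two very different 2x2 images can pool to the identical low-resolution value:

```python
def avg_pool(img):
    """Average a 2x2 image down to one pixel, as a crude 'downscale'."""
    return sum(sum(row) for row in img) / 4

face_a = [[0, 255], [255, 0]]              # high-contrast checkerboard
face_b = [[127.5, 127.5], [127.5, 127.5]]  # featureless grey

# Both collapse to the same single pixel: an upscaler seeing only the
# pooled value has no way to know which original produced it, so any
# "deblurred" detail is a statistical guess, not recovered evidence.
```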

As Wired noted, quite a lot. There’s a very real difference between using AI to upscale Mario’s face in Super Mario 64 and using it to merely suggest to cops what a suspect’s face might look like. For example, existing face recognition tools have been repeatedly assessed as riddled with racial, gender, and other biases, and police have reported extremely high failure rates in their use in criminal investigations. That’s before factoring in that the software doesn’t even know what the face really looks like—it’s hard not to imagine such a feature being used as a pretext by cops to fast-track investigative leads.

[…]

“… My intention with this technology is always to have it under human control. When AI gets it wrong it is checked by a person.” After all, it’s not like police have a long and storied history of using junk science to justify misconduct or prop up arrests based on flimsy evidence and casework, which often goes unquestioned by courts.

Ton-That is, of course, not so naive as to think that police won’t use these kinds of capabilities for purposes like profiling or padding out evidence. Again, Clearview’s backstory is full of unsettling ties to right-wing extremists—like the reactionary troll and accused Holocaust denier Chuck C. Johnson—and Ton-That’s track record is full of incidents where it looks an awful lot like he’s exaggerating capabilities or deliberately stoking controversy as a marketing tool. Clearview itself is fully aware of the possibilities for questionable use by police, which is why the company’s marketing once advertised that cops could “run wild” with their tools and the company later claimed to be building accountability and anti-abuse features after getting its hooks into our justice system.

Source: Clearview AI Says It Can Do the ‘Computer Enhance’ Thing

9 Horrifying Facts From the Facebook Whistleblower Interview

Last week, the Wall Street Journal published internal research from Facebook showing that the social media company knows precisely how toxic its own product is for the people who use it. But tonight, we learned how the Journal obtained those documents: A whistleblower named Frances Haugen, who spoke with CBS News’ 60 Minutes about the ways Facebook is poisoning society.

The 37-year-old whistleblower liberated “tens of thousands” of pages of documents from Facebook and even plans to testify to Congress at some point this week. Haugen has filed at least eight complaints with the SEC alleging that Facebook has lied to shareholders about its own product.

Fundamentally, Haugen alleges there’s a key conflict between what’s good for Facebook and what’s good for society at large. At the end of the day, things that are good for Facebook tend to be bad for the world we live in, according to Haugen. We’ve pulled out some of the most interesting tidbits from Sunday’s interview that highlight this central point.

1) Facebook’s algorithm intentionally shows users things to make them angry

Haugen explained to 60 Minutes how Facebook’s algorithm chooses content that’s likely to make users angry because that causes the most engagement. And user engagement is what Facebook turns into ad dollars.
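As a toy model of what engagement-optimized ranking looks like (all weights and posts below are invented; this is not Facebook’s actual algorithm), note how content predicted to provoke comments, reshares, and angry reactions outranks benign content:

```python
def rank_feed(posts, weights=None):
    """Rank posts purely by predicted engagement, highest first."""
    weights = weights or {"like": 1, "comment": 5, "reshare": 10, "angry": 5}
    def score(post):
        return sum(weights[k] * v for k, v in post["predicted"].items())
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "cat_photo",
     "predicted": {"like": 50, "comment": 2, "reshare": 1, "angry": 0}},
    {"id": "outrage_bait",
     "predicted": {"like": 10, "comment": 40, "reshare": 25, "angry": 30}},
]
feed = rank_feed(posts)  # the anger-driven post wins the top slot
```

Nothing in the score penalizes anger; as long as angry engagement is engagement, an optimizer like this will surface it.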

[…]

2) Facebook is worse than most other social media companies

[…]

Haugen previously worked at Pinterest and Google, and insists that Facebook really is worse than the rest of Big Tech in substantial ways.

3) Facebook dissolved its Civic Integrity unit after the 2020 election and before the Jan. 6 Capitol insurrection

Haugen worked at the so-called Civic Integrity unit of Facebook, in charge of combating political misinformation on the platform. But the social media company seemed to think they were in the clear after the U.S. presidential election in November 2020 and that Civic Integrity could be shut down.

[…]

4) Political parties in Europe ran negative ads because it was the only way to reach people on Facebook

[…]

Summarizing the position of political parties in Europe, Haugen explained, “You are forcing us to take positions that we don’t like, that we know are bad for society. We know if we don’t take those positions, we won’t win in the marketplace of social media.”

5) Facebook only identifies a tiny fraction of hate and misinformation on the platform

Facebook’s internal research shows that it identifies roughly 3-5% of hate on the platform and less than 1% of violence and incitement, according to one of the studies leaked by Haugen.

[…]

6) Instagram is making kids miserable

Facebook owns Instagram, and as 60 Minutes points out, the documents leaked by Haugen show that 13.5% of teen girls say Instagram makes thoughts of suicide worse, and 17% say it makes their eating disorders worse.

[…]

7) Employees at Facebook aren’t necessarily evil, they just have perverse incentives

Haugen says that the people who work at Facebook aren’t bad people, which seems like the kind of thing someone who previously worked at Facebook might say.

[…]

8) Haugen even has empathy for Zuck

[…]

9) Haugen believes she’s covered by whistleblower laws, but we’ll see

[…]

While Dodd-Frank hypothetically protects employees talking with the SEC, it doesn’t necessarily protect people talking with journalists and taking thousands of pages of documents. But we’re going to find out pretty quickly just how much protection whistleblowers actually get in the U.S. Historically, let’s just say the answer has been “not much.”


Source: 9 Horrifying Facts From the Facebook Whistleblower Interview

Google (GOOG) Urges EU Judges to Slash ‘Staggering’ $5 Billion Fine

Google called on European Union judges to cut or cancel a “staggering” 4.3 billion euro ($5 billion) antitrust fine because the search giant never intended to harm rivals.

The company “could not have known its conduct was an abuse” when it struck contracts with Android mobile phone makers that required them to take its search and web-browser apps, Google lawyer Genevra Forwood told the EU’s General Court in Luxembourg.

[…]

The European Commission’s lawyer, Anthony Dawes, scoffed at Google’s plea, saying the fine was a mere 4.5% of the company’s revenue in 2017, well below a 10% cap.

[…]

Source: Google (GOOG) Urges EU Judges to Slash ‘Staggering’ $5 Billion Fine – Bloomberg

Because Google had never ever heard of Microsoft and the antitrust lawsuits around Internet Explorer? Come on!

Lawsuit prepped against Google for using Brit patients’ data

A UK law firm is bringing legal action on behalf of patients it says had their confidential medical records obtained by Google and DeepMind Technologies in breach of data protection laws.

Mishcon de Reya said today it planned a representative action on behalf of Mr Andrew Prismall and the approximately 1.6 million individuals whose data was used as part of a testing programme for medical software developed by the companies.

It told The Register the claim had already been issued in the High Court.

DeepMind, acquired by Google in 2014, worked with the search software giant and Royal Free London NHS Foundation Trust under an arrangement formed in 2015.

The law firm said that the tech companies obtained approximately 1.6 million individuals’ confidential medical records without their knowledge or consent.

The Register has contacted Google, DeepMind and the Royal Free Hospital for their comments.

“Given the very positive experience of the NHS that I have always had during my various treatments, I was greatly concerned to find that a tech giant had ended up with my confidential medical records,” lead claimant Prismall said in a statement.

“As a patient having any sort of medical treatment, the last thing you would expect is your private medical records to be in the hands of one of the world’s biggest technology companies.

[…]

In April 2016, it was revealed that the web giant had signed a deal with the Royal Free Hospital in London to build an application called Streams, which can analyse patients’ details and identify those who have acute kidney damage. The app uses a fixed algorithm, developed with the help of doctors, so it is not technically AI.
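For a sense of what a “fixed algorithm” means here: the NHS detection rule that Streams built on flags acute kidney injury by comparing a patient’s serum creatinine to their baseline. A much-simplified sketch using standard KDIGO-style ratio thresholds; this is an illustration, not the actual Streams logic:

```python
def aki_flag(current_creatinine, baseline_creatinine):
    """Flag acute kidney injury from the ratio of the current serum
    creatinine reading to the patient's baseline (simplified staging)."""
    ratio = current_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return "AKI stage 3"
    if ratio >= 2.0:
        return "AKI stage 2"
    if ratio >= 1.5:
        return "AKI stage 1"
    return "no alert"
```

A rule like this is deterministic and auditable, which is exactly why no machine learning was needed, yet testing it still required real patient records.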

The software – developed by DeepMind, Google’s AI subsidiary – was first tested with simulated data, then tested again using 1.6 million sets of real NHS medical files provided by the London hospital. However, not every patient was aware that their data was being given to Google to test the Streams software. Streams has since been deployed and now handles real people’s details, but during development it used live medical records as well as simulated inputs.

Dame Fiona Caldicott, the UK’s National Data Guardian, told the hospital’s medical director, Professor Stephen Powis, that he had overstepped the mark, and that people had given no consent to have their information used in this way pre-deployment.

[…]

In a data-sharing agreement uncovered by the New Scientist, Google and its DeepMind artificial intelligence wing were granted access to current and historic patient data at three London hospitals run by the Royal Free NHS Trust.

Source: Lawsuit prepped against Google for using Brit patients’ data • The Register

Leaked Documents Show How Amazon’s Astro Robot Tracks Everything You Do – incompetently

Amazon’s new robot called Astro is designed to track the behavior of everyone in your home to help it perform its surveillance and helper duties, according to leaked internal development documents and video recordings of Astro software development meetings obtained by Motherboard. The robot’s person recognition system is heavily flawed, according to two sources who worked on the project.

The documents, which largely use Astro’s internal codename “Vesta” for the device, give extensive insight into the robot’s design, Amazon’s philosophy, how the device tracks customer behavior as well as flow charts of how it determines who a “stranger” is and whether it should take any sort of “investigation activity” against them.

[…]

The meeting document spells out the process in a much blunter way than Amazon’s cutesy marketing suggests.

“Vesta slowly and intelligently patrols the home when unfamiliar person are around, moving from scan point to scan point (the best location and pose in any given space to look around) looking and listening for unusual activity,” one of the files reads. “Vesta moves to a predetermined scan point and pose to scan any given room, looking past and over obstacles in its way. Vesta completes one complete patrol when it completes scanning all the scan point on the floorplan.”

[…]

“Sentry is required to investigate any unrecognized person detected by it or Audio Event in certain set of conditions are met,” one file reads. “Sentry should first try to identify the person if they are not still unrecognized for as long as 30s [seconds]. When the person is identified as unknown or 30s passed, Sentry should start following the person until Sentry Mode is turned off.”


A flow chart presented during the meeting explains exactly what happens when Astro detects a “presence” and how it is designed for “investigating strangers.” If a user has disabled “stranger investigation,” the robot will ignore a stranger. If it’s set to “Sentry mode” or a patrol mode, it will either approach the stranger or follow them, and begin a series of “investigation activities,” which Amazon describes as “a series of actions Sentry takes to investigate audio or presence while recording.” Generally, if Astro begins an investigation, it will follow the stranger, record audio and video of them, and then automatically upload a recording the user can view later.
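The decision flow described above can be sketched as straightforward branching logic; the state and mode names below are my own shorthand for whatever Amazon uses internally:

```python
def handle_presence(person_recognized, stranger_investigation_on, mode):
    """Decide what the robot does when it detects a presence."""
    if person_recognized:
        return "ignore"            # known household member
    if not stranger_investigation_on:
        return "ignore"            # user disabled "stranger investigation"
    if mode in ("sentry", "patrol"):
        # "investigation activities": approach/follow, record A/V, upload
        return "investigate"
    return "ignore"
```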

[…]

Developers who worked on Astro say the versions of the robot they worked on did not work well.

“Astro is terrible and will almost certainly throw itself down a flight of stairs if presented the opportunity. The person detection is unreliable at best, making the in-home security proposition laughable,” a source who worked on the project said. “The device feels fragile for something with an absurd cost. The mast has broken on several devices, locking itself in the extended or retracted position, and there’s no way to ship it to Amazon when that happens.”

[…]

Another source who worked on the project mentioned privacy and navigation as chief concerns. “As for my personal opinions on the device, it’s a disaster that’s not ready for release,” they said. “They break themselves and will almost certainly fall down stairs in real world users’ homes. In addition it’s also (in my opinion) a privacy nightmare that is an indictment of our society and how we trade privacy for convenience with devices like Vesta.”

The source also corroborated that Astro’s facial recognition abilities perform poorly, which is concerning for a device designed mainly to follow people around and determine if they’re a stranger or not.

[…]

“The goal is to make Vesta an ‘intelligent robot,’ and allow some simple but magical interactions with people,” the social robotics document states. To do this, Astro needs to fully map a user’s home, creating a heat map of “choke points” and highly trafficked areas where the robot is likely to get stuck or “places where it will easily get hit by humans” such as hallways, doorways, and the kitchen.


A map of a user’s home, with “choke points” in red

Astro is supposed to learn over time, meaning that it must track what humans are doing, where they are going, and where they are likely to congregate.

[…]


Source: Leaked Documents Show How Amazon’s Astro Robot Tracks Everything You Do

Bing Search Results Erases Images Of ‘Tank Man’ On Anniversary Of Tiananmen Square Crackdown (2021)

On the 32nd anniversary of the Tiananmen Square protests, internet users noticed Microsoft’s Bing search engine was producing some interesting results. Or, rather, it wasn’t producing expected search results for some possibly interesting reasons.

Users searching for the most iconic image of the protests — that of the unidentified person known only as “Tank Man” — were coming up empty. It appeared that Microsoft’s search engine was blocking results for an image that often serves as shorthand for rebellion against the Chinese government.

As was reported by several web users, followed by several news outlets, the apparent blocking of search results could be observed in both the United States and the United Kingdom, leaving users with the impression the Chinese government had pressured Microsoft to moderate search results for “tank man” in hopes of reducing any remembrance of the Tiananmen Square Massacre, which resulted in the deaths of 2,500-3,500 protesters.

The apparent censorship was blamed on Microsoft’s close relationship with the Chinese government, which allowed its search engine to be accessed by Chinese residents in exchange for complying with government censorship requests.

This led to Microsoft being criticized by prominent politicians for apparently allowing the Chinese government to dictate what users around the world could access in relation to the Tiananmen Square protests.

[…]

Shortly after the apparent censorship of the iconic “Tank Man” image was reported, Microsoft claimed the very timely removal of relevant search results was the byproduct of “accidental human error.”

However, the company refused to offer any additional explanation. And, while searching the term “Tank Man” produced search results in Bing, it did not generate the expected results.

Image via The Verge

Several hours after the first “fix,” things returned to normal, with “Tank Man” searches bringing up the actual Tank Man, rather than just tanks or tanks with men near or on the tanks.

Image via Twitter user Steven F

More clarification and comment was sought, but Microsoft apparently had nothing more to say about this “human error” and its conspicuous timing. Nor did it offer any details on whether or not this “human error” originated with its Beijing team. It also didn’t explain why the first fix resulted in images very few people would associate with the term “Tank Man.”

Source: Content Moderation Case Study: Bing Search Results Erases Images Of ‘Tank Man’ On Anniversary Of Tiananmen Square Crackdown (2021) | Techdirt

Marvel Files Lawsuit to Keep Iron Man, Spider-Man Rights From Creators

The families of iconic Marvel comic book writers and artists Stan Lee, Steve Ditko, Don Heck, Gene Colan, and Don Rico have filed termination of copyright notices on the superheroes they helped create. Marvel—which Disney has owned since 2009—unsurprisingly, disagrees and has filed lawsuits against all five to keep the characters in the Marvel stable and making the company billions.

The Hollywood Reporter broke the news. Without trying to get into too much legalese, creators can file termination of copyright notices to reclaim rights to their work after a set amount of time, with a minimum of 35 years. Marvel’s suits argue that the characters are ineligible for copyright termination because they were made as “work-for-hire”—as in, Marvel paid people to create characters for the company, meaning the company owns them outright. According to the report, if the creators’ heirs’ notices were accepted, Marvel would lose rights to characters including Iron Man, Spider-Man, Hawkeye, Black Widow, Doctor Strange, Falcon, Ant-Man, and more. One caveat: this only matters in the United States. According to THR, even if Marvel loses, Disney can continue making money off the characters everywhere else. If the heirs win, Disney would still share ownership.

Since Marvel has proactively sued to keep the copyrights to these characters, I suppose the creators’ claims have some validity to them, but as a layman, the case looks hopeless to me. Not only does the Walt Disney Company have the infinite cash reserves to keep the rights tied up in court for years, but there have been previous cases where Marvel creators claimed ownership and had to settle. Additionally, the lawyer representing the heirs is Marc Toberoff, who also represented the families of Superman creators Joe Shuster and Jerry Siegel when they tried to terminate DC Comics’ rights to the Man of Steel. DC was successfully represented by Dan Petrocelli—and he’s the one who just filed the lawsuits for Marvel.

More likely, the case will ultimately be about paying people some kind of fair compensation for turning Marvel into a billion-dollar company, which Disney has no desire to do (remember, Disney’s reportedly been paying creators a mere $5,000 for work it’s made those billions on). This is unfair, immoral, and purely greedy; the company has more than enough money to make all of these creators rich without coming close to denting its profits. In the best-case scenario, Disney/Marvel will give these folks as little as possible to make these legal annoyances go away early. It won’t be nearly as much as the company could and should give them, but at least it’ll be something.

Source: Marvel Files Lawsuit to Keep Iron Man, Spider-Man Rights From Creators

Even the fact that copyright still exists on these characters after the original creators have died is downright ridiculous

Apple’s App Tracking Transparency Feature Doesn’t Stop Tracking

In 2014, some very pervy creeps stole some very personal iCloud photos from some very high-profile celebs and put them on the open web, creating one very specific PR crisis for Apple’s CEO, Tim Cook. The company was about to roll out Apple Pay as part of its latest software update, a process that took more than a decade of bringing high-profile payment processors and retailers on board. The only issue was that nobody seemed to want their credit card details in the hands of the same company whose service had been used to steal dozens of nude photos of Jennifer Lawrence just a week earlier.

Apple desperately needed a rebrand, and that’s exactly what we got. Within days, the company rolled out a polished promotional campaign—complete with a brand new website and an open letter from Cook himself—explaining the company’s beefed-up privacy prowess, and the safeguards adopted in the wake of that leak. Apple wasn’t only a company you could trust, Cook said, it was arguably the company—unlike the other guys (*cough* Facebook *cough*) who built their Silicon Valley empires off of pawning your data to marketing companies, Apple’s business model is built off of “selling great products,” no data-mining needed.

That ad campaign’s been playing out for the last seven years, and by all accounts, it’s worked. It’s worked well enough that in 2021, we trust Apple with our credit card info, our personal health information, and most of what’s inside our homes. And when Tim Cook decried things like the “data-industrial complex” in interviews earlier this year and then rolled out a slew of iOS updates meant to give users the power they deserved, we updated our iPhones and felt a tiny bit safer.

The App Tracking Transparency (ATT) settings that came bundled in an iOS 14 update gave iPhone users everywhere the power to tell their favorite apps (and Facebook) to knock off the whole tracking thing. Saying no, Apple promised, would stop these apps from tracking you as you browse the web, and through other apps on your phone. Well, it turns out that wasn’t quite the case. The Washington Post was first to report on a research study that put Apple’s ATT feature to the test, and found the setting… pretty much useless. As the researchers put it:

In our tests of ten top-ranked apps, we found no meaningful difference in third-party tracking activity when choosing App Tracking Transparency’s “Ask App Not To Track.” The number of active third-party trackers was identical regardless of a user’s ATT choice, and the number of tracking attempts was only slightly (~13%) lower when the user chose “Ask App Not To Track”.

So, what the hell happened? In short, ATT addresses one specific (and powerful) piece of digital data that advertisers use to identify your specific device—and your specific identity—across multiple sites and services: the so-called ID for Advertisers, or IDFA. Telling an app not to track severs their access to this identifier, which is why companies like Facebook lost their minds over these changes. Without the IDFA, Facebook had no way to know whether, say, an Instagram ad translated into a sale on some third-party platform, or whether you downloaded an app because of an ad you saw in your news feed.

Luckily for said companies (but unluckily for us), tracking doesn’t start and end with the IDFA. Fingerprinting—cobbling together a bunch of disparate bits of mobile data to uniquely identify your device—has emerged as a pretty popular alternative for some major digital ad companies, which eventually led Apple to tell them to knock that shit off. But because “fingerprinting” encompasses so many different kinds of data in so many different contexts (and can go by many different names), nobody knocked anything off. And outside of one or two banned apps, Apple really didn’t seem to care.
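The fingerprinting technique described above can be illustrated with a minimal sketch. Everything here is hypothetical — the attribute names are not from any real SDK — but it shows why no single setting toggle stops this: none of the inputs is an “identifier” on its own.

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Combine stable device attributes into a single identifier.

    Individually these values are mundane, but hashed together they
    can single out a device almost as reliably as an explicit
    advertising ID like the IDFA.
    """
    # Canonical ordering so the same device always hashes identically
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical attributes a tracker might collect
fp = device_fingerprint({
    "model": "iPhone13,2",
    "os_version": "14.8",
    "screen": "1170x2532",
    "timezone": "America/New_York",
    "language": "en-US",
    "free_disk_mb": 23811,
})
print(fp[:16])  # stable per device, no IDFA required
```

Because the hash is deterministic, the same device produces the same fingerprint across apps and sites — which is exactly what “Ask App Not To Track” cannot sever.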

[…]

Some Apple critics in the marketing world have been raising red flags for months about potential antitrust issues with Apple’s ATT rollout, and it’s not hard to see why. It gave Apple exclusive access to a particularly powerful piece of intel on all of its customers, the IDFA, while leaving competing tech firms scrambling for whatever scraps of data they can find. If all of those scraps become Apple’s sole property, too, that’s practically begging for even more antitrust scrutiny to be thrown its way. What Apple seems to be doing here is what any of us would likely do in its situation: picking its battles.

Source: Apple’s App Tracking Transparency Feature Doesn’t Stop Tracking

Apple Confirms Fortnite Won’t Come Back to iPhones Anytime Soon

Today, Tim Sweeney confirmed on Twitter just how massive of an “L” Epic took in its recent trial against Apple. Apple has effectively “blacklisted” Fortnite from all Apple products until the legal clash between the two massive corporations reaches its conclusion, which could take as long as five years. (It’s even longer in Peely years.)

In the tweet, Sweeney posted a letter Epic had received from Apple confirming that Epic’s Apple developer account will not be reinstated, and that Epic cannot even request reinstatement until “the court’s judgement becomes final and unappealable.” That can take up to five years, according to Sweeney, who also claims this reneges on the position Apple previously expressed to both the court and the press. However, given that Epic is currently trying to appeal the decision, I’d argue that Apple’s reticence to let it return to the platform makes perfect sense.

This letter reinforces the reality of this trial, that both Epic and Apple resoundingly lost. There was no court order to get Fortnite back on the store, and Apple lost its ability to refuse payments outside of its ecosystem. Both massive corporations lost, and all other developers will reap the rewards of Epic’s hubris.

[…]

 

Source: Apple Confirms Fortnite Won’t Come Back to iPhones Anytime Soon

I’m not sure Epic minds so much, considering Apple devices are only used by parents, but it sure shows how childish Apple is.

Lithuania tells citizens to throw Xiaomi mobiles away for censoring functionality

In an audit it published yesterday [PDF] the agency called out Xiaomi’s Mi 10T 5G phone handset firmware for being able to censor terms such as “Free Tibet”, “Long live Taiwan independence” or “democracy movement”.

Defence Deputy Minister Margiris Abukevicius told reporters at the audit’s release: “Our recommendation is to not buy new Chinese phones, and to get rid of those already purchased as fast as reasonably possible.”

Although the censorship setting was disabled for phones sold into the manufacturer’s “European region”, the Lithuanian NCSC said (page 22):

It has been established that during the initialisation of the system applications factory-installed on a Xiaomi Mi 10T device, these applications contact a server in Singapore at the address globalapi.ad.xiaomi.com (IP address 47.241.69.153) and download the JSON file MiAdBlacklistConfig, and save this file in the metadata catalogues of the applications.

That file contained a list of more than 400 terms, including “free Tibet”, “89 Democracy Movement” (a reference to Tiananmen Square) and “long live Taiwan’s independence”.

The local security agency’s 32-page report, titled “Assessment of cybersecurity of mobile devices supporting 5G technology sold in Lithuania”, focused on devices from Xiaomi, Huawei and OnePlus.

“It is believed that this functionality allows a Xiaomi device to perform an analysis of the target multimedia content entering the phone; to search for keywords based on the MiAdBlacklist list received from the server,” said the Lithuanian report.

“Once the device determines that the content contains certain keywords, the device performs filtering of this content and the user cannot see it. The principle of data analysis allows analysis not only of words written in letters; the list that is regularly downloaded from the server can be formed in any language.”
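The mechanism the Lithuanian report describes boils down to two steps: download a JSON keyword list from a server, then suppress any content that matches it. A minimal sketch of that logic, assuming a simple JSON array of strings — the endpoint URL here is a stand-in, and only the `MiAdBlacklistConfig` file name comes from the report:

```python
import json
import urllib.request

# Stand-in for the real endpoint (the report names globalapi.ad.xiaomi.com)
BLACKLIST_URL = "https://example.invalid/MiAdBlacklistConfig"

def fetch_blacklist(url: str) -> list[str]:
    """Download the JSON keyword list, as the factory-installed
    apps reportedly do during initialisation."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def is_blocked(text: str, blacklist: list[str]) -> bool:
    """Case-insensitive substring match against the blacklist.

    As the report notes, nothing here is language-specific: the
    filter blocks whatever phrases the server-supplied list contains.
    """
    lowered = text.lower()
    return any(term.lower() in lowered for term in blacklist)

blacklist = ["free tibet", "89 democracy movement"]
print(is_blocked("Support Free Tibet today", blacklist))  # True — content hidden
```

The key point of the agency’s warning follows directly from this structure: since the list is re-downloaded regularly, the vendor can change what gets filtered, in any language, at any time.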

The agency said the censorship could be remotely re-enabled at any time by Xiaomi.

Source: Lithuania tells citizens to throw Xiaomi mobiles away • The Register

UK appeals court rules AI cannot be listed as a patent inventor

Add the United Kingdom to the list of countries that says an artificial intelligence can’t be legally credited as an inventor. Per the BBC, the UK Court of Appeal recently ruled against Dr. Stephen Thaler in a case involving the country’s Intellectual Property Office. In 2018, Thaler filed two patent applications in which he didn’t list himself as the creator of the inventions mentioned in the documents. Instead, he put down his AI DABUS and said the patent should go to him “by ownership of the creativity machine.”

The Intellectual Property Office told Thaler he had to list a real person on the application. When he didn’t do that, the agency decided he had withdrawn from the process. Thaler took the case to the UK’s High Court. The body ruled against him, leading to the eventual appeal. “Only a person can have rights. A machine cannot,” Lady Justice Elisabeth Laing of the Appeal Court wrote in her judgment. “A patent is a statutory right and it can only be granted to a person.”

Thaler has filed similar legal challenges in other countries, and the results so far have been mixed. In August, a judge in Australia ruled inventions created by an AI can qualify for a patent. However, only earlier this month, US District Judge Leonie M Brinkema upheld a decision by the US Patent and Trademark Office that said “only natural persons may be named as an inventor in a patent application.” Judge Brinkema said there may eventually be a time when AI becomes sophisticated enough to satisfy the accepted definitions of inventorship, but noted, “that time has not yet arrived, and, if it does, it will be up to Congress to decide how, if at all, it wants to expand the scope of patent law.”

Source: UK appeals court rules AI cannot be listed as a patent inventor | Engadget

This is strange, as patents can be granted to companies – which are legally people, but not really, well, people

Ig Nobel Prizes blocked by YouTube takedown over 1914 song snippet – can’t find human to fix the error

YouTube, the Ig Nobel Prizes, and the Year 1914

YouTube’s notorious takedown algorithms are blocking the video of the 2021 Ig Nobel Prize ceremony.

We have so far been unable to find a human at YouTube who can fix that. We recommend that you watch the identical recording on Vimeo.

The Fatal Song

This is a photo of John McCormack, who sang the song “Funiculi, Funicula” in the year 1914, inducing YouTube to block the 2021 Ig Nobel Prize ceremony.

Here’s what triggered this: The ceremony includes bits of a recording (of tenor John McCormack singing “Funiculi, Funicula”) made in the year 1914.

The Corporate Takedown

YouTube’s takedown algorithm claims that the following corporations all own the copyright to that audio recording that was MADE IN THE YEAR 1914: “SME, INgrooves (on behalf of Emerald); Wise Music Group, BMG Rights Management (US), LLC, UMPG Publishing, PEDL, Kobalt Music Publishing, Warner Chappell, Sony ATV Publishing, and 1 Music Rights Societies”

UPDATES: (Sept 19, 2021) There’s an ongoing discussion on Slashdot. (Sept 13, 2021) There’s an ongoing discussion on Hacker News about this problem.

Source: Improbable Research » Blog Archive

First of all, what is copyright doing protecting anything from 1914? The creator is long dead and buried, and the model of creating once and raking in money forever is ridiculous anyway.
Second, this shows the power the large copyright holders hold over smaller players – and the Ig Nobel Prizes aren’t exactly a small player! If a big corporation throws a DMCA claim at you, there’s nothing you can do – you are caught in a Kafka-esque hole with no hope in sight.

Australia gave police power to compel sysadmins into assisting account takeovers – so they plan to use it

Australia’s Federal Police force on Sunday announced it intends to start using new powers designed to help combat criminal use of encryption by taking over the accounts of some social media users, then deleting or modifying content they’ve posted.

The law also requires sysadmins to help those account takeovers.

The force (AFP) stated its intentions in light of the late August passage of the Surveillance Legislation Amendment (Identify and Disrupt) Bill 2021, which was first mooted in December 2020. While the Bill was subject to consultation, few suggestions were incorporated and in August the Bill sped through Australia’s Parliament after two days of superficial debate with many suggested amendments ignored.

As detailed in its explanatory memorandum, the Bill was aimed squarely at helping investigators to act against users of encrypted services.

[…]

Yes, dear reader, if granted those warrants mean the AFP and ACIC can take over an account and delete or modify content created by the accountholder. And if they can’t do that themselves, sysadmins are required to assist.

[…]

Another scenario of concern is “forum shopping” whereby investigators could be denied access to use of one law by a judge, so turn to another judge and try a different law that delivers essentially the same outcome.

The AFP seems not to be bothered by the debate: its announcements stated it will “be relentless in using the law and its powers to remove child sex abuse material and unlawful content from the dark web and other forums”

Source: Australia gave police power to compel sysadmins into assisting account takeovers – so they plan to use it • The Register

Well, as soon as you hear “kiddie porn” you know it’s going to be used for much, much more than kiddie porn. Who can argue against kiddie porn, right?

Singapore sends snitchbots into the streets to detect “undesirable social behaviours”

Singapore’s Home Team Science and Technology Agency (HTX) roving robot has hit the streets of Toa Payoh Central as part of a trial to support public officers in enhancing public health and safety.

The robot, named Xavier, was jointly developed by HTX and the Agency for Science, Technology and Research. It is fitted with sensors for autonomous navigation, a 360-degree video feed to the command and control centre, real-time sensing and analysis, and an interactive dashboard where public officers can receive real-time information from and be able to monitor and control multiple robots simultaneously.

[…]

Over a three-week trial period, Xavier will detect “undesirable social behaviours” including smoking in prohibited areas, illegal hawking, improperly parked bicycles, congregation of more than five people in line with existing social distancing measures, and motorised active mobility devices and motorcycles on footpaths.

If one of those behaviours is detected, Xavier will trigger real-time alerts to the command and control centre, and display appropriate messages to educate the public and deter such behaviours.

[…]

Source: Singapore sends Xavier the robot to help police keep streets safe under three-week trial | ZDNet

Apple wins some and loses some in big Epic Games lawsuit – judge must have been on acid

On the eve of the iPhone 13 launch, we’ve finally been handed a ruling in the lawsuit filed by Epic Games last year. Epic Games, the developer of Fortnite, sued Apple last year over claims the company was violating U.S. antitrust law by prohibiting developers from implementing alternative in-app purchase methods. Today, Judge Yvonne Gonzalez-Rogers issued her ruling in the Epic Games v. Apple lawsuit, handing app developers a major win in the fight for app payment freedom.

As part of her ruling, Judge Gonzalez-Rogers issued a permanent injunction against Apple that orders the company to lift its restrictions on iOS apps and App Store pages providing buttons, external links, and other “calls to action” that direct consumers to other purchasing mechanisms. The injunction essentially orders Apple to abandon its anti-steering policy, which prohibited app developers from informing users of alternative purchasing methods.

[…]

Apple wins on all but one important claim

Last year, Epic Games intentionally circumvented Apple’s App Store policy by introducing direct payments for in-app purchases in Fortnite. Immediately after, Apple pulled Fortnite from the App Store and suspended Epic’s developer account, citing a violation of the App Store guidelines regarding in-app payments. When Epic sued Apple in response, they sought to have the latter reinstate their developer account so they could re-release Fortnite on iOS. Apple argued that Fortnite and Epic’s developer account should not be restored as Epic intentionally breached the contract between the two companies (a contract that, of course, Epic argues is illegal.)

However, Judge Gonzalez-Rogers today ruled in favor of Apple on its counterclaim of breach of contract. “Apple’s termination of the DPLA and the related agreements between Epic Games and Apple was valid, lawful, and enforceable,” said the Judge in her ruling. Because of this, it’s unlikely Apple will ever reinstate Fortnite or Epic’s developer account, because it was found to be within its rights to suspend them in the first place. The Judge also ordered Epic to pay 30% of the revenue the company collected from Fortnite on iOS through Epic Direct Payment since it was implemented.

The Court also ruled that Epic Games “failed in its burden to demonstrate Apple is an illegal monopolist” in the market the judge narrowly defined as “digital mobile gaming transactions,” rather than adopting either party’s definition of the relevant market. The market in question is a $100 billion industry, and while Apple “enjoys considerable market share of over 55% and extraordinarily high profit margins,” Epic failed to prove to the Court that Apple’s behavior violated antitrust law. “Success is not illegal,” said Judge Gonzalez-Rogers in her ruling.

Source: Apple wins some and loses some in big Epic Games lawsuit

First the judge says it was wrong to force developers to pay exclusively through Apple, then says there were other options and Apple isn’t a monopoly and then says but you have to pay Apple a 30% cut of what you made through your other payment channel. What was this judge smoking?

Jagex Blocks Release Of Popular Runescape Mod Runelite HD

Runelite HD is a mod (made by one person, 117) that takes Old School RuneScape and gives it an HD makeover.

As far back as 2018, Jagex were issuing legal threats against mods like this, claiming they were copyright infringement. However, those appeared to have blown over as Jagex gave their blessing to the original Runelite.

Yet earlier this week, just hours before the improved Runelite HD was due for an official release, 117 was contacted by Jagex, demanding that work stop and that the release be cancelled. This time, however, it’s not down to copyright claims, but because Jagex says they’re making their own HD upgrade.

[…]

While that sounds somewhat fair at first, there’s a huge problem: Runelite HD doesn’t actually appear to break any of Jagex’s modding guidelines, and the company says new guidelines that spell out how Runelite HD does break them will be released next week.

Understandably, fans think this is incredibly shady, and have begun staging an in-game protest:

Mod creator 117 says they attempted to compromise with Jagex, even offering to remove their mod once the company had finished and released its own efforts, but “they declined outright,” seemingly spelling the end for a project that had consumed “approximately over 2000 hours of work over two years.”

Source: Jagex Blocks Release Of Popular Runescape Mod Runelite HD

Way to go – another company, like GTA’s Take-Two Interactive, pissing off its player base.

WhatsApp fined over $260M for EU privacy violations, failing to explain how data is shared with Facebook

WhatsApp didn’t fully explain to Europeans how it uses their data as called for by EU privacy law, Ireland’s Data Protection Commission said on Thursday. The regulator hit the messaging app with a fine of 225 million euros, about $267 million.

Partly at issue is how WhatsApp shares information with parent company Facebook, according to the commission. The decision brings an end to a GDPR inquiry the privacy regulator started in December 2018.

[…]

Source: WhatsApp fined over $260M for EU privacy violations – CNET

EU Bolsters Net Neutrality With Ruling Against Zero Rating

The European Union’s top court has flipped the bird to German mobile network operators Telekom Deutschland and Vodafone, ruling in two separate judgements that their practice of exempting certain services from data caps violated the bloc’s net neutrality rules.

“Zero rating” is when service providers offer customers plans that exempt certain data-consuming services (be it Spotify, Netflix, gaming, or whatever) from contributing towards data caps. Very often, those services are commercial partners of the provider, or even part of the same massive media conglomerate, allowing the provider to exert pressure on customers to use their data in a way that profits them further. This has the convenient benefit of making it easier for providers to keep ridiculous fees for data overages in place while punishing competing services that customers might use more if the zero-rating scheme wasn’t in place. No one wins, except for the telecom racket.

Net neutrality is the principle that telecom providers should treat all data flowing over their networks equally, not prioritizing one service over the other for commercial gain. As Fortune reported, the version of net neutrality rules passed in the European Union in 2015 was at the time weaker than Barack Obama-era rules in the U.S., as they didn’t explicitly ban zero rating. That’s no longer the case, as Donald Trump appointees at the Federal Communications Commission nuked the U.S.’s net neutrality rules in 2017, and a series of subsequent regulatory decisions and court rulings in the EU narrowed the scope of zero-rating practices there.

In 2016, EU regulators found that zero rating would be allowed so long as the zero-rated services were also slowed down when a customer ran up against a data cap, according to Fortune. In 2020, the Court of Justice of the European Union (CJEU) confirmed that interpretation and found it was illegal to block or slow down data after a user hit their cap on the basis that a particular service wasn’t part of a zero-rating deal. Still, carriers in the EU have continued to offer zero-rating plans, relying on perceived loopholes in the law.

The CJEU ruled on two separate cases involving Telekom and Vodafone on Thursday, which according to Reuters were brought by Germany’s Federal Network Agency (BNetzA) and the VZBV consumer association, respectively. At issue in the Telekom case was its “StreamOn” service, which exempts streaming services that work with the company from counting towards data caps—and throttles all video streaming, regardless of whether it’s from one of the StreamOn partners, when the cap is hit. The Vodafone case involved its practice of counting zero-rated services or mobile hotspot traffic towards data caps—advertising those plans with names like “Music Pass” or “Video Pass,” according to Engadget—when a customer leaves Germany to travel somewhere else in the EU.

Both of the companies’ plans violated net neutrality principles, the CJEU found, in a completely unambiguous decision titled “‘Zero tariff’ options are contrary to the regulation on open internet access.” Fortune wrote that BNetzA has already concluded that the court’s decision means that Telekom will likely not be able to continue StreamOn in its “current form.”

“By today’s judgments, the Court of Justice notes that a ‘zero tariff’ option, such as those at issue in the main proceedings, draws a distinction within Internet traffic, on the basis of commercial considerations, by not counting towards the basic package traffic to partner applications,” the CJEU told media outlets in a statement. “Such a commercial practice is contrary to the general obligation of equal treatment of traffic, without discrimination or interference, as required by the regulation on open Internet access.”

The court added, “Since those limitations on bandwidth, tethering or on use when roaming apply only on account of the activation of the ‘zero tariff’ option, which is contrary to the regulation on open Internet access, they are also incompatible with EU law.”

Source: EU Bolsters Net Neutrality With Ruling Against Zero Rating

Sky Broadband sends Subscribers browsing data through to Premier League without user knowledge or consent

UK ISP Sky Broadband is monitoring the IP addresses of servers suspected of streaming pirated content to subscribers and supplying that data to an anti-piracy company working with the Premier League. That inside knowledge is then processed and used to create blocklists used by the country’s leading ISPs, to prevent subscribers from watching pirated events.

[…]

In recent weeks, an anonymous source shared a small trove of information relating to the systems used to find, positively identify, and then ultimately block pirate streams at ISPs. According to the documents, the module related to the Premier League work is codenamed ‘RedBeard’.

The activity appears to start during the week football matches or PPV events take place. A set of scripts at anti-piracy company Friend MTS are tasked with producing lists of IP addresses that are suspected of being connected to copyright infringement. These addresses are subsequently dumped to Amazon S3 buckets and the data is used by ISPs to block access to infringing video streams, the documents indicate.

During actual event scanning, content is either manually or fingerprint matched, with IP addresses extracted from DNS information related to hostnames in media URLs, load balancers, and servers hosting Electronic Program Guides (EPG), all of which are used by unlicensed IPTV services.
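The blocking end of the pipeline described above — ISPs pulling a published list of suspect server addresses and dropping matching traffic — can be sketched roughly as follows. The entries are invented, and a real blocklist would be fetched from the distribution point (the article mentions Amazon S3 buckets) rather than hard-coded; the article also later notes lists include both single IPs and /24 ranges:

```python
import ipaddress

def load_blocklist(entries: list[str]) -> list:
    """Parse a mixed list of single IPs and CIDR ranges into
    network objects, as a distributed blocklist might contain."""
    return [ipaddress.ip_network(e, strict=False) for e in entries]

def should_block(ip: str, blocklist: list) -> bool:
    """True if the address falls inside any blocked network."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in blocklist)

# Hypothetical entries: one exact server address, one /24 range
blocklist = load_blocklist(["203.0.113.10", "198.51.100.0/24"])
print(should_block("198.51.100.77", blocklist))  # True — inside the /24
print(should_block("192.0.2.1", blocklist))      # False
```

Range-based entries are what make the “Top Talker” data so potent: one confirmed streaming server can take an entire neighbouring block of addresses offline for subscribers.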

Confirmed: Sky is Supplying Traffic Data to Assist IPTV Blocking

The big question then is how the Premier League’s anti-piracy partner discovers the initial server IP addresses that it subsequently puts forward for ISP blocking.

According to documents reviewed by TF, information comes from three sources – the anti-piracy company’s regular monitoring (which identifies IP addresses and their /24 range), manually entered IP addresses (IP addresses and ports), and a third, potentially more intriguing source – ISPs themselves.

“ISPs provide lists of Top Talker IP addresses, these are the IP addresses that they see on their network which many consumers are receiving a large sum of bandwidth from,” one of the documents reveals.

“The IP addresses are the uploading IP address which host information which the ISP’s customers are downloading information from. They are not the IP addresses of the ISP’s customer’s home internet connections.”

The document revealing this information is not dated, but other documents in the batch reference dates in 2021. At the time of publication, the document indicates that ISP cooperation is limited to Sky Broadband only. TorrentFreak asked Friend MTS if that remains the case or whether additional ISPs are now involved.

[…]

Source: Sky Subscribers’ Piracy Habits Directly Help Premier League Block Illegal Streams * TorrentFreak

Apple stalls CSAM auto-scan on devices after ‘feedback’ from everyone on Earth – will still scan all your pics at some point

Apple on Friday said it intends to delay the introduction of its plan to commandeer customers’ own devices to scan their iCloud-bound photos for illegal child exploitation imagery, a concession to the broad backlash that followed from the initiative.

“Previously we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them and to help limit the spread of Child Sexual Abuse Material,” the company said in a statement posted to its child safety webpage.

“Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

[…]

Apple – rather than actually engaging with the security community and the public – published a list of Frequently Asked Questions and responses to address the concern that censorious governments will demand access to the CSAM scanning system to look for politically objectionable images.

“Could governments force Apple to add non-CSAM images to the hash list?” the company asked in its interview of itself, and then responded, “No. Apple would refuse such demands and our system has been designed to prevent that from happening.”

Apple however has not refused government demands in China with regard to VPNs or censorship. Nor has it refused government demands in Russia, with regard to its 2019 law requiring pre-installed Russian apps.

Tech companies uniformly say they comply with all local laws. So if China, Russia, or the US were to pass a law requiring on-device scanning to be adapted to address “national security concerns” or some other plausible cause, Apple’s choice would be to comply or face the consequences – it would no longer be able to say, “We can’t do on-device scanning.”

Source: Apple stalls CSAM auto-scan on devices after ‘feedback’ from everyone on Earth • The Register

Lenovo pops up tips on its tablets. And by tips, Lenovo means: Unacceptable ads

Lenovo has come under fire for the Tips application on its tablets, which has been likened to indelible adware that forces folks to view ads.

One customer took to the manufacturer’s support forum late last month to say they were somewhat miffed to see an ad suddenly appear on screen to join Amazon Music on their Android-powered Lenovo Tab P11. The advertisement was generated as a push notification by the bundled Tips app.

“There is no option to dismiss,” the fondleslab fondler sighed. “You have to click to find out more. Further, these notifications cannot be disabled, nor can the Lenovo ‘Tips’ app be disabled.”

They went on to say: “This is not a tip. This is a push that is advertising a paid service. I loathe this sort of thing.”

Another chipped in: “I have a Lenovo Tab that also has this bloatware virus installed. There’s no way to disable the adverts (they call the ads tips, they’re not, they’re adverts for Amazon music etc.) This is ridiculous, Lenovo, I didn’t spend £170 on a tablet to be pumped with ads. Will not buy another Lenovo product.”

[…]

Source: Lenovo pops up tips on its tablets. And by tips, Lenovo means: Unacceptable ads • The Register

Huge GTA San Andreas Mod Shut Down Because Of Take-Two Harassment

After months of Take-Two Interactive attacking and fighting GTA modders, the folks behind the long-in-development San Andreas mod, GTA Underground, have killed the project and removed it from the web over “increasing hostility” from Take-Two and fears of further legal problems.

Over the last few months, Take-Two Interactive — the parent company of GTA devs Rockstar Games — has gone on a digital murder spree, sending multiple takedown notices to get old 3D-era GTA mods and source ports removed from the internet. The publisher is also suing the creators behind reverse-engineered source ports of Vice City and GTA III. As a result of this hostility, GTA Underground lead developer dkluin wrote in a post yesterday on the GTAForums that they and the other modders working on the project were now “officially ceasing the development” of GTA: Underground.

“Due to the increasing hostility towards the modding community and imminent danger to our mental and financial well-being,” explained dkluin, “We sadly announce that we are officially ceasing the development of GTA: Underground and will be shortly taking all official uploads offline.”

Dkluin also thanked the community for the support they received over the last six years and mentioned all the “incredible work” that went into the mod and the “great times” the team experienced working on it together. A final video, simply named “The End.”, was uploaded today on the modding team’s YouTube channel.

GTA Underground is a mod created for GTA San Andreas with the goal of merging all of the previous GTA maps into one mega environment. The mod even aimed to bring other cities from non-GTA games developed by Rockstar into San Andreas, including the cities featured in Bully and Manhunt.

The mod had already faced problems from Take-Two in July, when it was removed from ModDB. It has now been removed from all other official sources and sites.

In 2018, Kotaku interviewed dkluin about the mod and all the work going into it. He had started development back in 2014, when he was only 14 years old. GTA Underground wasn’t a simple copy-and-paste job; instead, the modders added AI and traffic routines to every map, making them fully playable as GTA cities. The team also had plans to add more cities to the game, including their own custom creations.

[…]

Source: Fan Dev Shuts Down Huge GTA San Andreas Mod Because Of Take-Two

Way to piss off your fan base