Executive at Swiss Tech Company Said to Operate Secret Surveillance Operation

The co-founder of a company that has been trusted by technology giants including Google and Twitter to deliver sensitive passwords to millions of their customers also operated a service that ultimately helped governments secretly surveil and track mobile phones, Bloomberg reported Monday, citing former employees and clients. From the report: Since it started in 2013, Mitto AG has established itself as a provider of automated text messages for such things as sales promotions, appointment reminders and security codes needed to log in to online accounts, telling customers that text messages are more likely to be read and engaged with than emails as part of their marketing efforts. Mitto, a closely held company with headquarters in Zug, Switzerland, has grown its business by establishing relationships with telecom operators in more than 100 countries. It has brokered deals that gave it the ability to deliver text messages to billions of phones in most corners of the world, including countries that are otherwise difficult for Western companies to penetrate, such as Iran and Afghanistan. Mitto has attracted major technology giants as customers, including Google, Twitter, WhatsApp, Microsoft’s LinkedIn and messaging app Telegram, in addition to China’s TikTok, Tencent and Alibaba, according to Mitto documents and former employees.

But a Bloomberg News investigation, carried out in collaboration with the London-based Bureau of Investigative Journalism, indicates that the company’s co-founder and chief operating officer, Ilja Gorelik, was also providing another service: selling access to Mitto’s networks to secretly locate people via their mobile phones. That Mitto’s networks were also being used for surveillance work wasn’t shared with the company’s technology clients or the mobile operators Mitto works with to spread its text messages and other communications, according to four former Mitto employees. The existence of the alternate service was known only to a small number of people within the company, these people said. Gorelik sold the service to surveillance-technology companies which in turn contracted with government agencies, according to the employees.

Source: Executive at Swiss Tech Company Said to Operate Secret Surveillance Operation – Slashdot

Life360 Reportedly Sells Location Data of Families and Kids

Life360, a popular tracking app that bills itself as “the world’s leading family safety service,” is purportedly selling location data on the 31 million families and kids that use it to data brokers. The chilling revelation may make users of the Tile Bluetooth tracker, which is being bought by Life360, think twice before continuing to use the device.

Life360’s data selling practices were revealed in a damning report published by the Markup on Monday. The report claims that Life360 sells location data on its users to roughly a dozen data brokers, some of which have sold data to U.S. government contractors. The data brokers then proceed to sell the location data to “virtually anyone who wants to buy it.” Life360 is purportedly one of the largest sources of data for the industry, the outlet found.

While selling location data on families and kids is already alarming, what’s even more frightening is that Life360 is purportedly failing to take steps to protect the privacy of the data it sells. This could potentially allow the location data, which the company says is anonymized, to be linked back to the people it belongs to.

[…]

Source: Life360 Reportedly Sells Location Data of Families and Kids

Document Shows Just How Much The FBI Can Obtain From Encrypted Communication Services

There is no “going dark.” Successive FBI heads may insist there is, but a document created by their own agency contradicts their dire claims that end-to-end encryption lets the criminals and terrorists win.

Andy Kroll has the document and the details for Rolling Stone:

[I]n a previously unreported FBI document obtained by Rolling Stone, the bureau claims that it’s particularly easy to harvest data from Facebook’s WhatsApp and Apple’s iMessage services, as long as the FBI has a warrant or subpoena. Judging by this document, “the most popular encrypted messaging apps iMessage and WhatsApp are also the most permissive,” according to Mallory Knodel, the chief technology officer at the Center for Democracy and Technology.

The document [PDF] shows what can be obtained from which messaging service, with the FBI noting that WhatsApp offers plenty of information investigators can obtain, including almost real-time collection of communications metadata.

WhatsApp will produce certain user metadata, though not actual message content, every 15 minutes in response to a pen register, the FBI says. The FBI guide explains that most messaging services do not or cannot do this and instead provide data with a lag and not in anything close to real time: “Return data provided by the companies listed below, with the exception of WhatsApp, are actually logs of latent data that are provided to law enforcement in a non-real-time manner and may impact investigations due to delivery delays.”

The FBI can obtain this info with a pen register order — the legal request used for years to obtain ongoing call data on targeted numbers, including numbers called and length of conversations. With a warrant, the FBI can get even more information. A surprising amount, actually. According to the document, WhatsApp turns over address book contacts for targeted users as well as other WhatsApp users who happen to have the targeted person in their address books.

Combine this form of contact chaining with a few pen register orders, and the FBI can monitor hundreds of conversations in near-real time. The caveat, of course, is that the FBI has no access to the content of the conversations. That remains locked up by WhatsApp’s encryption. Communications remain “warrant-proof,” to use a phrase bandied about by FBI directors. But are they really?

If investigators are able to access the contents of a phone (by seizing the phone or receiving permission from someone to view their end of conversations), encryption is no longer a problem. That’s one way to get past the going darkness. Then there’s stuff stored in the cloud, which can give law enforcement access to communications despite the presence of end-to-end encryption. Backups of messages might not be encrypted and — as the document points out — a warrant will put those in the hands of law enforcement.

If target is using an iPhone and iCloud backups enabled, iCloud returns may contain WhatsApp data, to include message content.

[…]

Source: Document Shows Just How Much The FBI Can Obtain From Encrypted Communication Services | Techdirt

U.S. Indicts Two Men for Running a $20 Million YouTube Content ID Scam – after 4 years of warnings

Two men have been indicted by a grand jury for running a massive YouTube Content ID scam that netted the pair more than $20m. Webster Batista Fernandez and Jose Teran managed to convince a YouTube partner that the pair owned the rights to 50,000+ tracks and then illegally monetized user uploads over a period of four years.

[…]

YouTube previously said that it paid $5.5 billion in ad revenue to rightsholders from content claimed and monetized through Content ID, but the system doesn’t always work exactly as planned.

Over the years, countless YouTube users have complained that their videos have been claimed and monetized by entities that apparently have no right to do so but, fearful of what a complaint might do to the status of their accounts, many opted to withdraw from battles they feared they might lose.

[…]

Complaints are not hard to find. Large numbers of YouTube videos uploaded by victims of the scam dating back years litter the platform, while a dedicated Twitter account and a popular hashtag have been complaining about MediaMuv since 2018.

As early as 2017, complaints were being made on YouTube/Google’s support forums, with just one receiving more than 150 replies.

“I want to make a claim through this place, since a few days ago a said company called MEDIAMUV IS STEALING CONTENT FROM MY CHANNEL AND FROM OTHER USERS, does anyone know something about said company?” one reads.

“[I] investigated and there is nothing in this respect. I only found a channel saying that several users are being robbed and that when they come to upload their own songs, MEDIAMUV detects the videos as theirs.”

[…]

Source: U.S. Indicts Two Men for Running a $20 Million YouTube Content ID Scam * TorrentFreak

Qualcomm’s new always-on smartphone camera is always looking out for you

“Your phone’s front camera is always securely looking for your face, even if you don’t touch it or raise to wake it.” That’s how Qualcomm Technologies vice president of product management Judd Heape introduced the company’s new always-on camera capabilities in the Snapdragon 8 Gen 1 processor set to arrive in top-shelf Android phones early next year.

[…]

But for those of us with any sense of how modern technology is used to violate our privacy, a camera on our phone that’s always capturing images even when we’re not using it sounds like the stuff of nightmares and has a cost to our privacy that far outweighs any potential convenience benefits.

Qualcomm’s main pitch for this feature is for unlocking your phone any time you glance at it, even if it’s just sitting on a table or propped up on a stand. You don’t need to pick it up or tap the screen or say a voice command — it just unlocks when it sees your face. I can see this being useful if your hands are messy or otherwise occupied (in its presentation, Qualcomm used the example of using it while cooking a recipe to check the next steps). Maybe you’ve got your phone mounted in your car, and you can just glance over at it to see driving directions without having to take your hands off the steering wheel or leave the screen on the entire time.

[…]

Qualcomm is framing the always-on camera as similar to the always-on microphones that have been in our phones for years. Those are used to listen for voice commands like “Hey Siri” or “Hey Google” (or lol, “Hi Bixby”) and then wake up the phone and provide a response, all without you having to touch or pick up the phone. But the difference is that they are listening for specific wake words and are often limited in what they can do until you actually pick up your phone and unlock it.

It feels a bit different when it’s a camera that’s always scanning for a likeness.

It’s true that smart home products already have features like this. Google’s Nest Hub Max uses its camera to recognize your face when you walk up to it and greet you with personal information like your calendar. Home security cameras and video doorbells are constantly on, looking for activity or even specific faces. But those devices are in your home, not always carried with you everywhere you go, and generally don’t have your most private information stored on them, like your phone does. They also frequently have features like physical shutters to block the camera or intelligent modes to disable recording when you’re home and only resume it when you aren’t. It’s hard to imagine any phone manufacturer putting a physical shutter on the front of their slim and sleek flagship smartphone.

Lastly, there have been many reports of security breaches and social engineering hacks to enable smart home cameras when they aren’t supposed to be on and then send that feed to remote servers, all without the knowledge of the homeowner. Modern smartphone operating systems now do a good job of telling you when an app is accessing your camera or microphone while you’re using the device, but it’s not clear how they’d be able to inform you of a rogue app tapping into the always-on camera.

To be honest, these things are also pretty damn scary! I understand that Americans have been habituated to ubiquitous surveillance, but here in the EU we still value our privacy and don’t like it much at all.

Ultimately, it comes down to a level of trust — do you trust that Qualcomm has set up the system in a way that prevents the always-on camera from being used for other purposes than intended? Do you trust that the OEM using Qualcomm’s chips won’t do things to interfere with the system, either for their own profit or to satisfy the demands of a government entity?

Even if you do have that trust, there’s a certain level of comfort with an always-on camera on your most personal device that goes beyond where we are currently.

Maybe we’ll just start having to put tape on our smartphone cameras like we already do with laptop webcams.

Source: Qualcomm’s new always-on smartphone camera is a potential privacy nightmare – The Verge

WhatsApp privacy policy tweaked in Europe after record fine

Following an investigation, the Irish data protection watchdog issued a €225m (£190m) fine – the second-largest GDPR fine in history – and ordered WhatsApp to change its policies.

WhatsApp is appealing against the fine, but is amending its policy documents in Europe and the UK to comply.

However, it insists that nothing about its actual service is changing.

Instead, the tweaks are designed to “add additional detail around our existing practices”, and will only appear in the European version of the privacy policy, which is already different from the version that applies in the rest of the world.

“There are no changes to our processes or contractual agreements with users, and users will not be required to agree to anything or to take any action in order to continue using WhatsApp,” the company said, announcing the change.

The new policy takes effect immediately.

User revolt

In January, WhatsApp users complained about an update to the company’s terms that many believed would result in data being shared with parent company Facebook, which is now called Meta.

Many thought refusing to agree to the new terms and conditions would result in their accounts being blocked.

In reality, very little had changed. However, WhatsApp was forced to delay its changes and spend months fighting the public perception to the contrary.

During the confusion, millions of users downloaded WhatsApp competitors such as Signal.

[…]

The new privacy policy contains substantially more information about what exactly is done with users’ information, and how WhatsApp works with Meta, the parent company for WhatsApp, Facebook and Instagram.

Source: WhatsApp privacy policy tweaked in Europe after record fine – BBC News

The Amazon lobbyists who kill U.S. consumer privacy protections

In recent years, Amazon.com Inc has killed or undermined privacy protections in more than three dozen bills across 25 states, as the e-commerce giant amassed a lucrative trove of personal data on millions of American consumers.

Amazon executives and staffers detail these lobbying victories in confidential documents reviewed by Reuters.

In Virginia, the company boosted political donations tenfold over four years before persuading lawmakers this year to pass an industry-friendly privacy bill that Amazon itself drafted. In California, the company stifled proposed restrictions on the industry’s collection and sharing of consumer voice recordings gathered by tech devices. And in its home state of Washington, Amazon won so many exemptions and amendments to a bill regulating biometric data, such as voice recordings or facial scans, that the resulting 2017 law had “little, if any” impact on its practices, according to an internal Amazon document.

[…]

Source: The Amazon lobbyists who kill U.S. consumer privacy protections

This is a detailed and creepy look at how Amazon undermines privacy protections in the US, and at the amount and scope of data it collects.

South Korea Is Giving Millions of Photos of all foreign travelers since 2019 to Facial Recognition Researchers

The South Korean Ministry of Justice has provided more than 100 million photos of foreign nationals who travelled through the country’s airports to facial recognition companies without their consent, according to attorneys with the non-governmental organization Lawyers for a Democratic Society.

While the use of facial recognition technology has become common for governments across the world, advocates in South Korea are calling the practice a “human rights disaster” that is relatively unprecedented.

“It’s unheard-of for state organizations—whose duty it is to manage and control facial recognition technology—to hand over biometric information collected for public purposes to a private-sector company for the development of technology,” six civic groups said during a press conference last week.

The revelation, first reported in the South Korean newspaper The Hankyoreh, came to light after National Assembly member Park Joo-min requested and received documents from the Ministry of Justice related to an April 2019 project titled Artificial Intelligence and Tracking System Construction Project. The documents show private companies secretly used biometric data to research and develop an advanced immigration screening system that would utilize artificial intelligence to automatically identify airport users’ identities through CCTV surveillance cameras and detect dangerous situations in real time.

Shortly after the discovery, civil liberty groups announced plans to represent both foreign and domestic victims in a lawsuit.

[…]

Despite this pushback, the technology is increasingly being used in commercial spaces and airports. This holiday season, Delta Airlines will be piloting a facial recognition boarding program in Atlanta, following similar moves by JetBlue. US Customs and Border Protection is already relying on facial recognition technology in dozens of locations.

While the South Korean government’s collaboration with the private sector is unprecedented in its scale, it is not the only collaboration of its kind. In 2019, a Motherboard investigation revealed the Departments of Motor Vehicles in numerous states had been selling names, addresses and other personal data to insurance or tow companies and to private investigators.

Source: South Korea Is Giving Millions of Photos to Facial Recognition Researchers

Which governments censor the tech giants the most?

Note: these numbers do not take into account the number of secret removal requests from governments, which are probably highest in the US (also see https://www.linkielist.com/global-domination/us-judge-rules-twitter-cant-be-transparent-about-amount-of-surveillance-requests-processed-per-year-due-to-national-security-of-the-4th-reich/)

In 2009, Google started recording the number of content removal requests it received from courts and government agencies all over the world, disclosing the figures on a six-month basis. Soon after, several other companies followed suit, including Twitter, Facebook, Microsoft, and Wikimedia.

This year, we’ve extended our study of the above to include Pinterest, Dropbox, Reddit, LinkedIn, TikTok, and Tumblr. Our study looks at the number of content removal requests by platform, which countries have the highest rates of content removal per 100,000 internet users, and how things have changed on a year-by-year basis.

What did we find?

Some governments avidly try to control online data, whether this is on social media, blogs, or both. And not all of the worst offenders may be who you expect.

Top 10 countries by number of content removal requests

According to our findings, the countries with the highest rate of content removal requests per 100,000 internet users are:

  1. Monaco – 341 content removal requests per 100,000 internet users
  2. Russia – 146 content removal requests per 100,000 internet users
  3. Turkey – 138 content removal requests per 100,000 internet users
  4. France – 97 content removal requests per 100,000 internet users
  5. Israel – 91 content removal requests per 100,000 internet users
  6. Liechtenstein – 68 content removal requests per 100,000 internet users
  7. Pakistan – 62 content removal requests per 100,000 internet users
  8. South Korea – 49 content removal requests per 100,000 internet users
  9. Mexico – 49 content removal requests per 100,000 internet users
  10. Japan – 49 content removal requests per 100,000 internet users

With 130 content removal requests for fewer than 39,000 internet users, Monaco has had the most content removal requests per 100,000 internet users. The majority of these (116) were directed at Facebook, with over 98 percent submitted in 2019.
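The per-capita figures above come from simple arithmetic. A minimal sketch, assuming Monaco has roughly 38,100 internet users (a figure inferred from the article’s “130 requests for fewer than 39,000 internet users”):

```python
def requests_per_100k(total_requests: int, internet_users: int) -> float:
    """Content removal requests per 100,000 internet users."""
    return total_requests / internet_users * 100_000

# Monaco: 130 requests against an assumed ~38,100 internet users
print(round(requests_per_100k(130, 38_100)))  # ~341, matching the ranking above
```

The same formula reproduces the other per-100k entries in the list, given each country’s internet-user count.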

In second and third place are Russia and Turkey with 146 and 138 content removal requests per 100,000 internet users respectively. Russia had 179,013 requests in total with 69 percent of these being directed toward Google. In contrast, Turkey had 90,696 requests in total with the majority of these (55 percent) being directed toward Twitter.

We’ll delve into the whats and whys of these removals below. But which countries submitted the most requests overall?

If we switch the top 10 to be the countries that submitted the highest number of requests overall, things do change slightly:

  1. Russia – 179,013 content removal requests submitted in total. The majority of these (69 percent) were directed toward Google
  2. India – 97,631 content removal requests submitted in total. The majority of these (76 percent) were directed toward Facebook
  3. Turkey – 90,696 content removal requests submitted in total. The majority of these (55 percent) were directed toward Twitter
  4. Japan – 56,861 content removal requests submitted in total. The majority of these (98 percent) were directed toward Twitter
  5. France – 54,627 content removal requests submitted in total. The majority of these (80 percent) were directed toward Facebook
  6. Mexico – 45,671 content removal requests submitted in total. The majority of these (99 percent) were directed toward Facebook
  7. Brazil – 36,151 content removal requests submitted in total. The majority of these (72 percent) were directed toward Facebook
  8. South Korea – 24,658 content removal requests submitted in total. The majority of these (44 percent) were directed toward Twitter
  9. Pakistan – 23,377 content removal requests submitted in total. The majority of these (84 percent) were directed toward Facebook
  10. Germany – 19,040 content removal requests submitted in total. The majority of these (68 percent) were directed toward Facebook

Russia outranked all other countries with a 6-digit figure for government content removal requests, making 179,013 requests across all platforms. It’s also the highest-ranking country for the number of requests submitted to Google, Reddit, TikTok, and Dropbox.

Interesting, too, is how the United Kingdom and the United States rank in eleventh and twelfth place respectively for the number of content removal requests submitted. The UK had 17,406 content removal requests in total with 64 percent being submitted to Facebook. Meanwhile, the US had 12,474 in total with 80 percent submitted to Google. In relation to the number of internet users, however, the UK submitted 27 per 100,000 and the US just 4 per 100,000, placing them 16th and 50th respectively in the per-100,000-internet-users rankings.

Highest content removal requests by platform

Now we know which countries have submitted the most requests, which country comes out on top for each platform?

  • Google: Russia accounts for 60 percent of requests – 123,607 of 207,066
  • Facebook: India accounts for 24 percent of requests – 74,674 of 308,434
  • Twitter: Japan accounts for 31 percent of requests – 55,590 of 181,689
  • Microsoft: China accounts for 52 percent of requests – 8,665 of 16,817
  • Pinterest: South Korea accounts for 46 percent of requests – 2,345 of 5,134
  • Tumblr: South Korea accounts for 71 percent of requests – 2,260 of 3,193
  • Wikimedia: United States accounts for 23 percent of requests – 977 of 4,256
  • Dropbox: Russia accounts for 34 percent of requests – 752 of 2,217
  • TikTok: Russia accounts for 24 percent of requests – 150 of 620
  • Reddit: Russia accounts for 29 percent of requests – 143 of 488
  • LinkedIn: China accounts for 71 percent of requests – 72 of 102
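The per-platform percentages in the list above are simply each country’s share of the platform’s total requests. A quick sketch using three of the figures listed (taken directly from the bullet points):

```python
# Dominant requester per platform: (country, country_requests, platform_total),
# figures copied from the list above.
platform_top = {
    "Google":   ("Russia", 123_607, 207_066),
    "Facebook": ("India",   74_674, 308_434),
    "Twitter":  ("Japan",   55_590, 181_689),
}

for platform, (country, country_reqs, total) in platform_top.items():
    share = country_reqs / total * 100
    print(f"{platform}: {country} accounts for {share:.0f}% of requests")
```

Running this reproduces the rounded 60, 24, and 31 percent shares quoted above.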

What about China’s lower rankings across every category but Microsoft?

China tends not to bother going through content providers and their in-house reporting mechanisms to censor content. It simply blocks entire sites and apps outright, forcing internet service providers to bar access on behalf of the government. China has banned all of the websites we have used in this comparison, except for LinkedIn and some of Microsoft’s services – the two areas where it dominates the content removal requests.

Which tech giant is receiving the highest percentage of removal requests in each country?

If we look at which tech giant is receiving the highest percentage of removal requests in each country, we can see that Google and Facebook tend to receive the vast majority.

Tech giant government content removal requests

Many Central European, South East Asian, and some South American countries submit the majority of their removal requests to Facebook, while many African and Eastern European countries, as well as the US, Canada, and Australia, submit most of theirs to Google. A large number of Middle Eastern countries submit the majority of requests to Twitter.

Biggest years for government content removal requests

Following a slight dip in 2019 (a 2 percent decrease from the number of requests submitted in 2018), removal requests bounced back up by 69 percent from 2019 to 2020. Twitter accounted for the largest percentage of these requests with 80,744 (40 percent) of the 203,698 requests submitted in total. It was closely followed by Facebook (62,314 or 31 percent) and Google (44,065 or 22 percent).

Over the years, these platforms have received the most content removal requests. But when you take into consideration that all three are the most used of all the platforms we’ve covered, that’s perhaps no surprise.

However, what the above does show us is how the focus on platforms has changed over the years.

Facebook’s biggest year for content removal requests came in 2015 when 76,395 requests were submitted (25 percent of its overall total). These requests then dropped significantly in 2016 before increasing by 155 and 21 percent from 2016 to 2017 and 2017 to 2018 respectively. Figures then dropped by 34 percent from 2018 to 2019 before almost doubling again from 2019 to 2020.

Google also witnessed a similar drop in 2019 when requests dipped by 30 percent, having been growing by around 10,000 each year from 2016 to 2018. In 2020, the number of requests rose again by 46 percent.

Twitter, however, didn’t follow this trend. In 2019, Twitter saw a 97 percent increase in the number of requests submitted (rising from 23,464 in 2018 to 46,291 in 2019). The number of content removal requests submitted to Twitter continued to rise significantly in 2020, too, when they nearly doubled to 80,744. In fact, of all the platforms we’ve studied, Twitter is the only platform (bar LinkedIn and Reddit, which have only recently begun to publish reports) that has seen an increase in content removal requests each and every year.
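Twitter’s 97 percent jump is a straightforward year-over-year change. A minimal sketch using the 2018 and 2019 figures quoted above:

```python
def yoy_change(prev: int, curr: int) -> float:
    """Percentage change from one year's total to the next."""
    return (curr - prev) / prev * 100

# Twitter: 23,464 requests in 2018 -> 46,291 in 2019 (figures from the article)
print(round(yoy_change(23_464, 46_291)))  # ~97 percent increase
```

The same calculation yields the other year-on-year percentages cited in this section.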

Why does Twitter appear to be dominating content removal requests? After all, it doesn’t have the largest number of users (it has around 396.5m users compared to Facebook’s 2.8bn).

The majority of the increase comes from Japan, India, South Korea, and Indonesia. As we’ll see further on, Twitter in Japan has recently been under fire for censoring government critics. Other reasons could be increases in scams, misinformation around elections, and general violations of local laws.

Russia accounts for 60 percent of Google’s content removal requests

As mentioned previously, Russia dominates the number of content request removals made to Google, accounting for 123,607 (60 percent) in total. Despite Russia’s requests dropping from over 30,000 in 2018 to just under 20,000 in 2019, they jumped back up to a record-breaking 31,384 in 2020. This dip in 2019 was a worldwide trend, however, with a 30 percent decrease in removal requests in 2019 followed by a 46 percent increase in 2020.

Nearly 34 percent of Russia’s requests cite national security as the reason, closely followed by copyright (26 percent) and regulated goods and services (18 percent).

Russia’s requests are significantly higher than second-place Turkey’s, which sent just 14,242 requests (7 percent of all requests received). Turkey was closely followed by India (10,138, or 4.89 percent) and the US (9,933, or 4.79 percent). Defamation is the main reason for all of these countries’ requests, accounting for 39 percent of Turkey’s total, 27 percent of India’s total, and 58 percent of the US’s total.

Which of Google’s products are being targeted by these removal requests?

YouTube and web searches are the prime targets for these removal requests. Of all the requests, 50 percent are directed toward YouTube and 30 percent toward web searches.

Examples of Google content removal requests

Some examples of the requests submitted by Russia, Turkey, and India include:

Russia: “Roskomnadzor requested that we block a Russian-language summary of a Financial Times report claiming that the content was “extremist”. The article stated that the real number of coronavirus deaths in Russia is potentially 70% higher than what official statistics report.” – The content wasn’t removed, in part due to procedural defects in the way the request was served (Jan-Jun, 2020).

Turkey: “We received a court order to delist 5 URLs from Google Search and to remove 1 Blogger blog post on the basis of “right-to-be-forgotten” legislation, on behalf of a high-ranking official. The news articles reported accusations of organised crime, which allegedly led to a criminal complaint.” – The URLs were not delisted or removed (Jul-Dec, 2020).

Turkey: “We received a court order to remove 2 Google Groups posts, 2 Blogger posts, 1 Blogger image, and an entire Blogger blog publishing political caricatures of a very senior Government official of Turkey.” – The content was not removed (Jul-Dec, 2016).

India: “We received multiple requests from Indian law enforcement for 173 YouTube URLs depicting content related to COVID-19. The reported content ranged from conspiracy theories and religious hate speech related to COVID-19 to news reports and criticism of the authorities’ handling of the pandemic.” 14 URLs were removed for violating YouTube’s community guidelines, 30 URLs were restricted in India based on cited local laws. Further information was requested for 106 URLs, of which 10 URLs were not removed and 13 URLs were already down.

India accounts for 24 percent of Facebook’s content removal requests

Facebook received the largest number of government content requests overall with 308,434 in total. India made up the vast majority of these, with its 74,674 requests accounting for nearly 25 percent of the total. The largest share of India’s requests (40 percent) were made in 2015, when 30,126 requests were submitted. Since then, India’s requests have remained much lower, only reaching two or three thousand per year, except in 2018 when requests spiked again at just over 19,000.

Interestingly, in 2015, the Supreme Court of India struck down section 66A of the Information Technology Act, 2000, which made posting “offensive” comments online a crime punishable by jail. Perhaps this led to an influx of offensive comments on platforms like Facebook, or authorities turned to Facebook’s content removal system to try and combat things differently.

In second place for removal requests via Facebook is Mexico with 45,217. Most of these requests (45 percent) were placed in the first half of 2017, shortly after Mexico first started submitting removal requests (its first figures are recorded for the latter part of 2016). Therefore, Mexican officials were perhaps “catching up” on the content that they thought violated local law. Mexico’s removal requests dropped dramatically in 2018 (2,040 submitted in total) before rising in 2019 (by 240 percent to 6,946) and in 2020 (by 93 percent to 13,399).

Mexico was closely followed by France with 43,816 requests. Again, the majority of these requests were submitted years ago (37,695 or 86 percent were submitted in the second half of 2015). But unlike Mexico, France’s requests have continued to decline year on year with just 298 submitted in all of 2020. This dramatic peak in removal requests does coincide with the November 2015 terror attacks in Paris.

Oddly, the US doesn’t feature anywhere near the top for removal requests, ranking 57th with a mere 27 removal requests since reporting began. Facebook’s Transparency Report suggests a country might not make the list either because Facebook’s services aren’t available there or because there have been no items of this type to report. The former clearly doesn’t apply to the US, and the latter seems unlikely too, especially considering the United States’ removal requests across other platforms. Furthermore, there is a case study (like the ones depicted below) for the US:

“We received a request from a county prosecutor’s office to remove a page opposing a county animal control agency, alleging that the page made threatening comments about the director of the agency and violated laws against menacing.” Facebook reviewed the page, found no credible threats, and concluded it didn’t violate its Community Standards. (Oct 2015)

Examples of Facebook content removal requests

India: “We received a request from law enforcement in India to remove a photo that depicted a sketch of the Prophet Mohammed.” – The content didn’t violate Facebook’s Community Standards but was made unavailable in India where any depiction of Mohammed is forbidden. (Jun 2016)

France: “Following the November 2015 terrorist attacks in Paris, we received a request from L’Office Central de Lutte Contre la Criminalité Liée aux Technologies de l’Information et de la Communication (OCLCTIC), a division of French law enforcement, to remove a number of instances of a photo taken inside the Bataclan concert venue depicting the remains of several victims. The photo was alleged to violate French laws related to protecting human dignity.” – The content didn’t violate Facebook’s Community Standards but 32,100 instances of the photo were restricted in France. It was still available in other countries. (Nov 2015)

Mexico: “We received a request from the Mexican Federal Electoral Court to remove 239 items in connection with two complaints filed by the Partido de la Revolución Democrática (“PRD”) against Governmental Entities in Mexico. The PRD alleged that the content violated Mexico’s election laws.” – The content didn’t violate Facebook’s Community Standards but access to 63 posts were restricted in Mexico as they were deemed unlawful. 159 items were duplicated or had already been removed. (Jan 2020)

Japan accounts for 31 percent of Twitter’s content removal requests

Japan had the largest number of government content removal requests on Twitter, with 55,590 submitted in total, making up 31 percent of all requests recorded by Twitter. Most of these (36,573, or 66 percent) were submitted in 2020. In fact, Japan’s content removal requests to Twitter have increased dramatically in recent years, jumping by 1,916 percent from 2018 to 2019 (from 875 to 17,640) and by 107 percent from 2019 to 2020 (from 17,640 to 36,573).
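The growth percentages quoted throughout are the standard year-over-year calculation; as a sanity check, a minimal sketch reproducing Japan’s figures:

```python
def pct_change(old, new):
    """Year-over-year growth, rounded to a whole percentage."""
    return round((new - old) / old * 100)

# Japan's removal requests to Twitter, per the report's figures
assert pct_change(875, 17_640) == 1_916   # 2018 -> 2019
assert pct_change(17_640, 36_573) == 107  # 2019 -> 2020
```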

While removal requests to Twitter have increased year on year worldwide, Japan’s growth exceeds the worldwide averages of 97 percent from 2018 to 2019 and 74 percent from 2019 to 2020. This comes amid recent reports that Twitter Japan appears to be suspending government critics. However, Twitter’s official report suggests the majority of the removal requests relate to laws on narcotics and psychotropics, obscenity, or money lending.

In second place was Turkey with 49,525 requests, followed by Russia with 36,787 requests. Although Russia follows Japan’s trend with yearly increases in removal requests (99 percent from 2018 to 2019 and 54 percent from 2019 to 2020), Turkey’s removal requests are in decline (dropping by 20 percent from 2018 to 2019 and by 28 percent from 2019 to 2020).

Examples of Twitter content removal requests

Turkey: “Twitter received a court order from Turkey regarding two Tweets containing insulting language towards a high-level official of a prominent bank in Turkey for violation of personal rights. Twitter withheld both Tweets in Turkey in response to the court order.” (Jul-Dec, 2020)

Russia: “We received the first Periscope removal request from Roskomnadzor concerning a prisoner’s account. Citing Article 82 of the Russian Criminal Executive Code, the reporter asked us to ‘block the account from which the violating broadcast was made’. However, the reported account had no broadcasts, so we did not take any action.” (Jan-Jun 2017)

France: “We withheld one Tweet in response to a legal demand from the Office Central de Lutte contre la Criminalité liée aux Technologies de l’Information et de la Communication (OCLCTIC) for glorification of terrorist attacks.” (Jul-Dec 2017)

China accounts for 52 percent of Microsoft’s content removal requests

As we have already seen, China barely features on any of the aforementioned platforms for content removal requests. This is due to the widespread blocking of those platforms in China, which removes the need for such requests. However, as some of Microsoft’s products are available in China, the country accounts for over half of all requests submitted to this tech giant.

Unfortunately, Microsoft doesn’t offer any insight into why the content removal requests are submitted. What it does indicate, however, is how many requests result in any action being taken. From July to December 2020, 96 percent of China’s requests were actioned. Russia (the second-highest submitter of requests) had just 41 percent of its requests actioned, while France had 89 percent.

Since the second half of 2018, China has submitted over 1,000 removal requests to Microsoft in every six-month period. Russia, however, upped its requests significantly in the second half of 2019, submitting nearly 300 percent more than in the first half (2,951 compared to 743). These started to drop off again in 2020, falling by 45 percent and 58 percent in the first and second halves, respectively.

Content removal requests across other platforms

Google, Facebook, Twitter, and Microsoft account for the vast majority of content removal requests, but the following also show interesting insights into where governments are focusing their online censorship efforts.

Dropbox

Russia submitted 34 percent of all content removal requests to Dropbox, followed by France with 24 percent and the UK with 21 percent. Russia’s requests peaked in 2017, with 243 of its 752 requests (32 percent) submitted that year. France’s peak came in 2018, with 63 percent of its total (331 of 524) submitted then. The UK also submitted the largest share of its requests (41 percent) in 2017.

Since 2017/18, Dropbox’s removal requests have decreased quite significantly, falling by 38 percent from 2018 to 2019 and by 52 percent from 2019 to 2020.

Dropbox doesn’t provide insight into the types of content removal requests submitted, but it does appear to action the majority of requests it receives from most countries. For example, the US submitted 33 requests affecting 45 accounts, and all but 2 of those accounts had action taken against them. By contrast, of the 48 requests submitted by Russia in 2020, which affected 13 accounts, only 7 accounts had content blocked.

LinkedIn

LinkedIn receives very few content removal requests, according to its transparency report, and the vast majority of these are submitted by China: 42 of the 50 requests in 2020 came from China, and only 14 countries have ever submitted such a request in the last three years (2018 to 2020).

Pinterest

The number of requests submitted to Pinterest has grown significantly within the last two reporting periods, increasing by 500 percent from 2019 to 2020 (from 680 to 4,078). South Korea and Russia account for the majority of these requests, submitting 46 and 43 percent of the total requests respectively.

Most of South Korea’s requests (99 percent) came in 2020 while Russia has been upping its requests since 2018. Russia submitted 102 in 2018, increasing by 376 percent to 486 in 2019 before rising by a further 234 percent to 1,622 in 2020.

Most of the content removal requests submitted to Pinterest are due to violations of community guidelines. For example, in 2020, 90 percent of the requests submitted were due to content that violated Pinterest’s community guidelines. No specific examples are available.

Reddit

Even though government content removal requests to Reddit have increased in recent years, the numbers are still in the low hundreds. Furthermore, as Reddit’s report demonstrates, much of the content restricted as a result of these requests is restricted only locally: over 71 percent of the pieces of content flagged by government requests in 2020 were restricted only in the requesting jurisdiction.

Russia is, again, the main culprit, submitting over 29 percent of all requests. Turkey has submitted the second-highest number (100, or 20 percent), but most of these came in 2018 and 2019. In 2020, South Korea upped its requests to 60 in total (it submitted only 1 in 2019 and none before that).

No further information on the type of requests is available.

Tumblr

Data is only available from mid-2019 for Tumblr so it’s hard to conduct real comparisons on how things have changed on a year-by-year basis here. However, from the second half of 2019 to the first half of 2020, requests jumped by 229 percent (from 224 to 738) before rising by another 202 percent in the second half of 2020 (from 738 to 2,231).

South Korea dominates the requests submitted to this platform, accounting for 71 percent of all requests ever submitted. According to Tumblr’s report, 96 percent of the requests submitted by South Korea in 2020 resulted in data being removed; the global average was 95 percent.

No further details on the requests are available.

TikTok

The number of requests submitted to TikTok has been steadily increasing in recent years. Most of these have come from Russia (24 percent), Pakistan (16 percent), and India (15 percent). While India and Pakistan submitted requests in both 2019 and 2020, all of Russia’s requests came in 2020 alone.

TikTok doesn’t provide insight into the reasons for content removal requests but does give figures for how much content each request affected. Pakistan’s 97 removal requests in the second half of 2020 saw the greatest amount of content affected, with 14,263 pieces implicated in total. In contrast, Russia’s 135 requests implicated just 429 pieces of content.

Wikimedia

From 2018 to 2019, Wikimedia’s content removal requests dropped by 35 percent (from 880 to 573), before rising again by 29 percent (from 573 to 741) from 2019 to 2020.

The United States makes up the greatest chunk of these requests across all years, accounting for 23 percent in total. However, the US’s requests have decreased in recent years.

What is particularly interesting about these Wikimedia content removal requests is that they are hardly ever actioned. According to the reports, only 2 of the 380 requests submitted in the second half of 2020 were actioned. Before that, the only content removal request accepted was from Ukraine in 2014. A blogger included a photo of his visa to visit Burma/Myanmar on his website. He had scrubbed his personal details from the image. The same picture later appeared on English Wikipedia in an article about the country’s visa policy. The redactions were removed and his information exposed. Given the nature of the information and the circumstances of how it was exposed, Wikimedia granted the takedown request.

Methodology

Our team extracted the data from the transparency reports for Twitter, Facebook, Microsoft, Wikimedia, Pinterest, Dropbox, Reddit, LinkedIn, TikTok, and Tumblr. We analyzed the data by country and year, while also noting any other significant details where available.

In Facebook’s latest report, for the second half of 2020, every country was listed as having at least 12 removal requests. Given the number of countries showing exactly 12, where the majority normally show 0, this appeared to be a glitch in the report. We therefore replaced those values of 12 with 0 to avoid overstating the number of requests received.
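The cleanup step described above is a simple substitution; a sketch, assuming the per-country counts are held in a plain dict (the sample countries and figures are hypothetical):

```python
def clean_requests(counts, glitch_value=12):
    """Zero out the suspect blanket value, per the methodology above."""
    return {country: 0 if n == glitch_value else n
            for country, n in counts.items()}

# Hypothetical sample: countries reporting exactly 12 are treated as 0.
sample = {"Country A": 2954, "Country B": 12, "Country C": 298}
assert clean_requests(sample) == {"Country A": 2954, "Country B": 0, "Country C": 298}
```

Note that this zeroes every entry equal to 12, including any country that genuinely submitted 12 requests, which is the trade-off the methodology accepts.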

When creating a ratio of content removal requests to internet users, we omitted two countries from the top 10: Tokelau and the Cook Islands. They have just 1 and 6 content removal requests in total, respectively, but because of their tiny populations they would rank as having a high rate of requests per 100,000 users, which would be an unfair representation.

Sources

https://transparencyreport.google.com/government-removals/by-country?hl=en

https://transparency.twitter.com/en/removal-requests.html

https://www.microsoft.com/en-us/corporate-responsibility/crrr

https://govtrequests.facebook.com/content-restrictions

https://transparency.wikimedia.org/content.html

https://www.dropbox.com/transparency/reports

https://about.linkedin.com/transparency/government-requests-report

https://policy.pinterest.com/en/transparency-report

https://www.redditinc.com/policies/transparency-report-2020

https://www.tumblr.com/transparency

https://www.tiktok.com/safety/resources/transparency-report-2020-2

Source: Which government censors the tech giants the most? – Comparitech

Does Copyright Give Companies The Right To Search Your Home And Computer?

One reason why copyright has become so important in the digital age is that it applies to the software many of us use routinely on our smartphones, tablets and computers. To run those programs, you must have a license of some kind (unless the software is in the public domain, which rarely applies to modern code). The need for a license is why we must agree to terms and conditions when we install new software. On Twitter, Alvar C.H. Freude noticed something interesting in the software license agreement for Capture One: “world-class tools for editing, organizing and working with photos”, according to the Danish company that makes it (found via Wolfie Christl). The license begins by warning:

if you do not agree to the terms of this license, you may not install or use the software but should promptly return the software to the place where you obtained it for a refund.

That’s normal enough, and merely reflects the power of copyright holders to impose “take it or leave it” conditions on users. Less common is the following:

Capture One or a third-party designated by Capture One in its sole discretion has the right to verify your compliance with this License at any time upon request including without limitation to request information regarding your installation and/or use of the Software and/or to perform on-site investigations of your installation and use of the Software.

If you use Capture One, you must provide “without limitation access to your premises, IT systems on which the Software is installed”, and “Capture One or an Auditor may decide in their sole discretion to apply software search tools in accordance with audits.”

That is, thanks to copyright, a company is perfectly able to demand the right to access a user’s premises, the computer systems they use, and to run search tools on that system as part of an audit. Although this applies to business premises, there’s no reason a software license could not demand the same right to access somebody’s home. In fact, there are really no limits on what may be required. You’re not obliged to agree to such terms, but most people do, often without even checking the details.

The fact that such requirements are possible shows how far copyright has strayed from the claimed purpose of protecting creators and promoting creativity. Copyright has mutated into a monster because it was never designed to regulate activities, as it does with software, just static objects like books and drawings.

Source: Does Copyright Give Companies The Right To Search Your Home And Computer? | Techdirt

Blizzard started this with World of Warcraft, granting itself the right to search your hard drive and memory. Many games since then have given themselves the same ability, and they make use of it.

Microsoft blocks workaround that let Windows 11 users avoid its Edge browser – browser wars are on again

Microsoft plans to update Windows 11 to block a workaround that has allowed users to open Start menu search results in a browser other than Edge. The loophole was popularized by EdgeDeflector, an app that allows you to bypass some of the built-in browser restrictions found in Windows 10 and 11. Before this week, companies like Mozilla and Brave had planned to implement similar workarounds to allow users to open Start menu results in their respective browsers, but now won’t be able to do so.

When the block first appeared in an early preview build of Windows 11 last week, it looked like it was added by mistake. However, on Monday, the company confirmed it intentionally closed the loophole.

“Windows openly enables applications and services on its platform, including various web browsers,” a spokesperson for Microsoft told The Verge. “At the same time, Windows also offers certain end-to-end customer experiences in both Windows 10 and Windows 11, the search experience from the taskbar is one such example of an end-to-end experience that is not designed to be redirected. When we become aware of improper redirection, we issue a fix.”

Daniel Aleksandersen, the developer of EdgeDeflector, was quick to criticize the move. “These aren’t the actions of an attentive company that cares about its product anymore,” he said in a blog post. “Microsoft isn’t a good steward of the Windows operating system. They’re prioritizing ads, bundleware, and service subscriptions over their users’ productivity.”

Mozilla was similarly critical of Microsoft. “People deserve choice. They should have the ability to simply and easily set defaults and their choice of default browser should be respected,” a spokesperson for the company told The Verge. “We have worked on code that launches Firefox when the microsoft-edge protocol is used for those users that have already chosen Firefox as their default browser. Following the recent change to Windows 11, this planned implementation will no longer be possible.”

[…]

Source: Microsoft blocks workaround that let Windows 11 users avoid its Edge browser | Engadget

Portugal: Proposed law tries to sneak in biometric mass surveillance.

Whilst the European Parliament has been fighting bravely for the rights of everyone in the EU to exist freely and with dignity in publicly accessible spaces, the government of Portugal is attempting to push their country in the opposite direction: one of digital authoritarianism.

[…]

Eerily reminiscent of the failed attempts by the Serbian government just two months ago to rush in a biometric mass surveillance law, Portugal has now asked its parliament to approve a law in a shocking absence of democratic scrutiny. Just two weeks before the national assembly is due to be dissolved, the government wants parliamentarians to quickly approve the law, without public consultation or evidence. The law would enable and encourage widespread biometric mass surveillance – even though we have repeatedly shown just how harmful these practices are.

[…]

Source: Portugal: Proposed law tries to sneak in biometric mass surveillance. – Reclaim Your Face

Woman Allegedly Made $57,000 From Unofficial Demon Slayer Cakes

A 34-year-old resident of Tokyo’s Shibuya has been arrested on suspicion of violating Japanese copyright law after selling unlicensed Demon Slayer cakes.

According to Kyodo News, the woman sold the cakes through Instagram, with customers submitting their desired images to be turned into frosting, cream, and sugar. The suspect is said to have charged between 13,000 yen ($114) and 15,000 yen ($132) per cake. Since July 2019, she is believed to have made over 6,500,000 yen in sales. That’s over $57,000!

It’s a lot of cakes, too.

The Metropolitan Police Department released photos of the criminal cakes in question, which can be seen in the above TBS News clip.

Source: Woman Allegedly Made $57,000 From Unofficial Demon Slayer Cakes

yay well done copyright. not.

Apple has tight control over states’ digital ID cards

Apple’s digital ID card support in iOS 15 may be convenient, but it also comes with tight requirements for the governments that use them. CNBC has learned states using Apple’s system are required to not only run the platforms for issuing and checking credentials, but hire managers to handle Apple’s requests and meet the iPhone maker’s performance reporting expectations. States also have to “prominently” market the feature and encourage other government agencies (both state and federal) to adopt the technology.

Contracts are nearly identical for Arizona, Georgia, Kentucky and Oklahoma, some of the earliest adopters of the program. That suggests other states, including Connecticut, Iowa, Maryland and Utah, may have to honor similar terms.

Apple declined to comment. A representative for Arizona’s Transportation Department told CNBC there were no payments to Apple or other “economic considerations,” though the states would have to cover the costs.

The details raise a number of concerns. While it isn’t surprising that states would have to pay for at least some of the expenses, the contracts give a private company a significant amount of control over the use and promotion of government systems while asking governments to foot the bills.

[…]

Source: Apple has tight control over states’ digital ID cards | Engadget

Take Two and Rockstar Use DMCA Claims To Remove More GTA Mods

As players continue to criticize the recently released GTA Trilogy remastered collection, Rockstar Games parent company Take-Two Interactive has decided this is the perfect time to use DMCA takedown notices to remove some more GTA mods and fan projects.

On November 11, according to the folks over at the GTA modding site LibertyCity, Take-Two contacted them and used DMCA strikes to remove three different GTA-related mods. The three removed mods are listed below:

  • GTA Advance PC Port Beta 2
  • The Lost and Damned Unlocked for GTA 4
  • GTA IV EFLC The Lost And Damned (65%)

GTA Advance PC Port is a fan-developed project attempting to port the game to the GTA 3 engine. Developed by Digital Eclipse, GTA Advance was only ever released on the Game Boy Advance, in 2004.

The Lost and Damned Unlocked for GTA IV is a mod released in 2009 which lets players swap out the star of GTA IV, Niko Bellic, with the protagonist of the Lost and Damned DLC, biker Johnny Klebitz. It also included some new biker outfits and icons.

Finally, GTA IV EFLC The Lost And Damned (65%) isn’t even a mod! It’s just a save file that lets players start from 65 percent completion. Yes, Take-Two used a DMCA strike against a save file for a game released over a decade ago.

These are just the latest in a growing number of GTA mods Take-Two has gone after and removed using legal DMCA notices. Over the last year, the company has been on a takedown spree, like a GTA character on a rampage. It has also sued fan devs over source code projects, leading some old mods, like GTA Underground, to shut down over fears of more legal and financial trouble.

[…]

Regardless of whether these takedowns are evidence of a future GTA IV remaster, it is still a frustrating situation for modders and community devs who have spent decades improving, porting, and maintaining the classic GTA games, allowing fans to play them years after Rockstar moved on. Kotaku spoke to some modders who seemed fed up with Rockstar, and many more have moved on to other games from other companies, worried about the potential legal pitfalls of continuing to mod Grand Theft Auto titles.

Source: Take Two and Rockstar Use DMCA Claims To Remove More GTA Mods

Latest Windows 11 overrides attempts to avoid using Edge – browser wars again!

Back in 2017, Daniel Aleksandersen created a free helper application called EdgeDeflector to counter behavioral changes Microsoft made in the way Windows handles mouse clicks on certain web links.

Typically, https:// links get handled by whatever default browser is set for the system in question. But there are ways to register a custom protocol handler, for operating systems and web browsers, that defines the scheme to access a given resource (URI).

Microsoft did just that when it created the microsoft-edge: URI scheme. By prefixing certain links as microsoft-edge:https://example.com instead of https://example.com, the company can tell Windows to use Edge to render example.com instead of the system’s default browser.

Microsoft is not doing this for all web links – it hasn’t completely rejected browser choice. It applies the microsoft-edge:// protocol to Windows 10 features like News and Interests, Widgets in Windows 11, various help links in the Settings app, search links from the Start menu, Cortana links, and links sent from paired Android devices. Clicking these links will normally open Edge regardless of the default browser setting.

When the microsoft-edge:// protocol is used, EdgeDeflector intercepts the protocol mapping to force affected links to open in the user’s default browser like regular https:// links. That allows users to override Microsoft and steer links to their chosen browsers.
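The interception step boils down to rewriting a microsoft-edge: URI back into the plain link it wraps. A minimal Python sketch of that rewriting (not EdgeDeflector’s actual code, which is a registered Windows protocol handler; the `url=` query form handled below is an assumption about how some of these links are encoded):

```python
from urllib.parse import parse_qs

def deflect(uri: str) -> str:
    """Strip the microsoft-edge: wrapper and return the underlying URL.

    Handles two shapes: the direct form microsoft-edge:https://example.com
    and a query form microsoft-edge:?...&url=<percent-encoded URL>.
    """
    prefix = "microsoft-edge:"
    if not uri.startswith(prefix):
        return uri  # already a plain link; nothing to deflect
    rest = uri[len(prefix):].removeprefix("//")  # tolerate microsoft-edge://
    if rest.startswith("?"):
        params = parse_qs(rest[1:])  # parse_qs percent-decodes values
        return params["url"][0] if "url" in params else uri
    return rest

# A deflector would then hand the result to the default browser,
# e.g. webbrowser.open(deflect(uri)).
```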

This approach has proven to be a popular one: Brave and Firefox recently implemented their own microsoft-edge:// URI scheme interception code to counter Microsoft’s efforts to force microsoft-edge:// links into its Edge browser.

But since Windows 11 build 22494, released last week, EdgeDeflector no longer works.

This is on top of Microsoft making it tedious to change the default browser on Windows 11 from Edge: in the system settings, you have to navigate to Apps, then Default apps, find your preferred browser, and assign every link and file type you need to it, clicking through the extra dialog boxes Windows throws at you. Your preferred browser may offer a shortcut through this process when you install it or when you tell it to become the default.

[…]

Source: Latest Windows 11 overrides attempts to avoid using Edge • The Register

Fifth Circuit Says Man Can’t Sue Federal Agencies For Allegedly Targeting and Tormenting Him After He Refused To Be An FBI Informant

The secrecy surrounding all things national security-related continues to thwart lawsuits alleging rights violations. The Fifth Circuit Court of Appeals has just dumped a complaint brought by Abdulaziz Ghedi, a naturalized American citizen who takes frequent trips to Somalia, the country he was born in. According to Ghedi’s complaint, rejecting the advances of one federal agency has subjected him to continuous hassling by a number of other federal agencies.

The Appeals Court decision [PDF] opens with a paragraph that telegraphs the futility of Ghedi’s effort, as well as the ongoing string of indignities the government has decided to inflict on people who just want to travel.

Abdulaziz Ghedi is an international businessman who regularly jets across the globe. Frequent travelers, however, are not always trusted travelers. In recent years, Ghedi has had repeated run-ins with one of America’s most beloved institutions: modern airport security.

The general indignities were replaced with seemingly more personal indignities when Ghedi decided he wasn’t interested in working part-time for the feds.

Ghedi complains that ever since he refused to be an informant for the Federal Bureau of Investigation a decade ago, he has been placed on a watchlist, leading to “extreme burdens and hardship while traveling.”

This isn’t a novel complaint. This has happened to plenty of immigrants and US citizens who visit countries the federal government finds interesting. Many, many Muslims have been approached by the FBI to work as informants. And many have reported their traveling experiences got noticeably worse when they refused to do so.

Without moving past a motion to dismiss, there can be no discovery. And national security concerns mean there isn’t going to be much to discover, even if a plaintiff survives a first round of filings.

Unsurprisingly, the Government refuses to confirm or deny anything.

That put Ghedi in the crosshairs of “a byzantine structure featuring an alphabet soup of federal agencies,” as the court puts it. The DHS oversees everything. Day-to-day hassling is handled by the TSA (domestic travelers) and the CBP (international travelers). Ghedi saw more of one (CBP) than the other, but the TSA still handles screening of passengers and luggage, so he saw plenty of both.

The refusal to join the FBI as a paid informant apparently led to all of the following:

• an inability to print a boarding pass at home, requiring him to interact with ticketing agents “for an average of at least one hour, when government officials often appear and question” him;

• an SSSS designation on his boarding passes;

• TSA searches of his belongings, “with the searches usually lasting at least an hour”;

• TSA pat downs when departing the U.S. and CBP pat downs when returning to the U.S.;

• encounters with federal officers when boarding and deboarding planes;

• questioning and searches by CBP officers “for an average of two to three hours” after returning from international travel;

• CBP confiscation of his laptop and cellphone “for up to three weeks”;

• being taken off an airplane two times after boarding; and

• being detained for seven hours by DHS and CBP officials in Buffalo, New York in May 2012 and being detained in Dubai for two hours in March 2019.

Ghedi approached the DHS through its court-mandated redress program to inquire about his status twice — once in 2012 and again in 2019. In both cases, the DHS refused to confirm or deny anything about his travel status or his placement on any watchlists that might result in enhanced screening and extended conversations with federal agents every time he flew.

Ghedi sued the heads of all the agencies involved, alleging rights violations stemming from his refusal to become an informant and his apparent placement on some watchlist operated by these agencies.

Ghedi brings two Fourth Amendment claims. The first alleges that the heads of the DHS, TSA, and CBP violated his Fourth Amendment rights through “prolonged detentions,” and “numerous invasive, warrantless patdown searches” lacking probable cause. The second alleges that the heads of the DHS, TSA, and CBP also violated his Fourth Amendment rights through their agents conducting “warrantless searches of his cell phones without probable cause.” The Fourth Amendment protects “[t]he right of the people to be secure in their persons . . . and effects, against unreasonable searches and seizures.”

The district court said he had no standing to sue. The Fifth Circuit says he does. But standing to sue doesn’t matter if you sue the wrong people. The Appeals Court says there’s a plausible injury alleged here, but it wasn’t perpetrated by the named defendants.

Even though we hold that Ghedi has plausibly alleged an injury in fact, he still must satisfy standing’s second prong—that his injury is fairly traceable to these Defendants. Here Ghedi’s Fourth Amendment claims falter. That is because Ghedi bases his Fourth Amendment claims on TSA and CBP agents’ searching him and seizing his electronics. He argues these searches and seizures are atypical actions, even for people on the Selectee List. Yet instead of suing these agents directly, Ghedi has brought his Fourth Amendment claims against the heads of DHS, TSA, and CBP. Ghedi does not allege that any of these officials personally conducted or directed the searches or seizures he has experienced. And his allegations that his experiences are atypical cut against an inference that these agents are following official policy.

Not only that, but the court says Ghedi has never been prevented from traveling. At worst, traveling has become a constant hassle, marked by hours-long delays, unexplained device seizures, and plenty of unwanted conversations with federal agents. But ultimately Ghedi got where he was going and I guess that’s good enough.

Ghedi never alleges that he was prevented from ultimately getting to his final destination. At most, these allegations lead to a reasonable inference that the Government has inconvenienced Ghedi. But they do not plausibly allege a deprivation of Ghedi’s right to travel.

There are some rights the court will recognize but this isn’t one of them.

In short, Ghedi has no right to hassle-free travel. In the Supreme Court’s view, international travel is a “freedom” subject to “reasonable governmental regulation.” And when it comes to reasonable governmental regulation, our sister circuits have held that Government-caused inconveniences during international travel do not deprive a traveler’s right to travel.

And, putting the final nail in Ghedi’s litigation coffin, the Appeals Court says the government’s secrets may harm individuals but they can’t harm their reputation… because they’re secret.

As we noted at the outset, Ghedi’s status on the Selectee List is a Government secret. Simply put, secrets are not stigmas. The very harm that a stigma inflicts comes from its public nature. Ghedi pleaded no facts to support that the Government has ever published his status—one way or the other—on the Selectee List. His assertions that the Government has attached the “stigmatizing label of ‘suspected terrorist’” and “harm[ed] . . . his reputation” are legal conclusions, not factual allegations.

That’s how it goes for litigants trying to sue over rights violations perpetrated by agencies engaged in the business of national security. Allegations are tough to verify because the government refuses to confirm, deny, or even discuss a great deal of its national security work in court. Ghedi could always try this lawsuit again, perhaps armed with FOIA’ed documents pertaining to his travels and the many agencies that make it difficult for him. But that’s as unlikely to result in clarifying information for the same reason: national security.

[…]

Source: Fifth Circuit Says Man Can’t Sue Federal Agencies For Allegedly Targeting Him After He Refused To Be An FBI Informant | Techdirt

How to Stop Chrome From Sharing Your Motion Data on Android

[…] Mysk, a duo of app developers and security researchers, recently exposed Chrome’s shadiness on Twitter. In the tweet, Mysk brings to light that, by default, Chrome is sharing your phone’s motion data with the websites you visit. This is not cool.

Why you don’t want third parties accessing your motion data

To start with, this is—as I have pointed out—creepy af. The data comes from your phone’s accelerometer, the sensor responsible for tracking the device’s orientation and position. That sensor makes it possible to switch from portrait to landscape mode, as well as track you and your phone’s motion. For example, it empowers fitness apps to know how many steps you took, so long as you had your phone on you.

Since most of us keep our phones in our pocket or on our person, there is a lot of motion data generated on the device throughout the day. Google Chrome, by design, allows any website you click on to request that motion data, and hands it over with gusto. Researchers have found that these sites use accelerometer data to monitor ad interactions, check ad impressions, and to track your device (well, duh). Those first two, however, are infuriatingly sketchy; websites don’t just want to know if you’ll click on an ad or not, they want to know how you physically interact with these popups. Hey, why stop there? Why not tap into my camera and see what color shirt I’m wearing?
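To make the mechanism concrete, here's a minimal sketch of what any page script can do once Chrome hands over the motion stream. The `devicemotion` event is a standard web API; the `classifyMotion` helper and its variance threshold are made up for illustration.

```javascript
// Hypothetical helper: guess whether the phone is moving from the
// variance of recent accelerometer magnitudes (threshold is arbitrary).
function classifyMotion(samples) {
  const mags = samples.map(({ x, y, z }) => Math.hypot(x, y, z));
  const mean = mags.reduce((a, b) => a + b, 0) / mags.length;
  const variance =
    mags.reduce((a, m) => a + (m - mean) ** 2, 0) / mags.length;
  return variance > 0.5 ? 'moving' : 'still';
}

// In a browser, Chrome delivers this event to any page by default
// unless the "Motion sensors" site setting is turned off:
if (typeof window !== 'undefined') {
  window.addEventListener('devicemotion', (event) => {
    // event.accelerationIncludingGravity has x/y/z in m/s^2;
    // a tracking script could feed these into something like
    // classifyMotion to profile how you handle your phone.
    console.log(event.accelerationIncludingGravity);
  });
}
```

The point is that no permission prompt stands between the page and this data stream, which is exactly what the setting below disables.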

How to stop Chrome from sharing motion data with sites

Delete the app from your phone. Kidding. I know the vast majority of people on Android aren’t going to want to switch from Chrome to another mobile browser. That said, privacy-minded users might want to jump ship to something more reputable—like Firefox—and, if so, good for you.

But there are plenty of benefits to sticking with Chrome, especially on Android (considering the platform is also designed and operated by Google). If you don’t want to take that drastic step, you can simply toggle a setting to stop Chrome from sharing this data. As Mysk points out in their tweet, you can disable motion-data sharing from Chrome’s settings.

Here’s how: Open the app, tap the three dots in the top-right corner, then choose “Settings.” Next, scroll down, tap “Site settings,” then “Motion sensors.” Turn off the toggle here to make sure no more third-party sites can ask for your motion data from here on out.

Source: How to Stop Chrome From Sharing Your Motion Data on Android

Microsoft will now snitch on you at work like never before

[…]

this news again comes courtesy of Microsoft’s roadmap service, where Redmond prepares you for the joys to come.

This time, there are a couple of joys.

The first is headlined: “Microsoft 365 compliance center: Insider risk management — Increased visibility on browsers.”

It all sounded wonderful until you read those last four words, didn’t it? For this is the roadmap for administrators. And when you give a kindly administrator “increased visibility on browsers,” you can feel sure this means an elevated level of surveillance of what employees are typing into those browsers.

In this case, Microsoft is targeting “risky activity.” Which, presumably, has some sort of definition. It offers a link to its compliance center, where the very first sentence has whistleblower built in: “Web browsers are often used by users to access both sensitive and non-sensitive files within an organization.”

And what is the compliance center monitoring? Why, “files copied to personal cloud storage, files printed to local or network devices, files transferred or copied to a network share, files copied to USB devices.”

You always assumed this was the case? Perhaps. But now there will be mysteriously increased visibility.

“How might this visibility be increased?,” I hear you shudder. Well, there’s another little roadmap update that may, just may, offer a clue.

This one proclaims: “Microsoft 365 compliance center: Insider risk management — New ML detectors.”

Yes, your company will soon have extra-special robots to crawl along after you and observe your every “risky” action. It’s not enough to have increased visibility on browsers. You must also have Machine Learning constantly alert for someone revealing your lunch schedule.

Microsoft offers a link to its Insider Risk Management page. This enjoys some delicious phrasing: “Customers acknowledge insights related to the individual user’s behavior, character, or performance materially related to employment can be calculated by the administrator and made available to others in the organization.”

Yes, even your character is being examined here.

[…]

Source: Microsoft will now snitch on you at work like never before | ZDNet

UK Schools Normalizing Biometric Collection By Using Facial Recognition For Meal Payments

Subjecting students to surveillance tech is nothing new. Most schools have had cameras installed for years. Moving students from desks to laptops allows schools to monitor internet use, even when students aren’t on campus. Bringing police officers into schools to participate in disciplinary problems allows law enforcement agencies to utilize the same tech and analytics they deploy against the public at large. And if cameras are already in place, it’s often trivial to add facial recognition features.

The same tech that can keep kids from patronizing certain retailers is also being used to keep deadbeat kids from scoring free lunches. While some local governments in the United States are trying to limit the expansion of surveillance tech in their own jurisdictions, governments in the United Kingdom seem less concerned about the mission creep of surveillance technology.

Some students in the UK are now able to pay for their lunch in the school canteen using only their faces. Nine schools in North Ayrshire, Scotland, started taking payments using biometric information gleaned from facial recognition systems on Monday, according to the Financial Times. [alt link]

The technology is being provided by CRB Cunningham, which has installed a system that scans the faces of students and cross-checks them against encrypted faceprint templates stored locally on servers in the schools. It’s being brought in to replace fingerprint scanning and card payments, which have been deemed less safe since the advent of the COVID-19 pandemic.
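For illustration, "cross-checking against stored templates" typically boils down to comparing a numeric embedding of the live scan against each enrolled faceprint. This is emphatically not CRB Cunningham's actual system; the function names, vectors, and threshold below are all assumptions.

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hypothetical matcher: returns the best-matching student id, or null
// if no stored template is close enough to charge an account.
function matchFaceprint(embedding, templates, threshold = 0.9) {
  let best = null, bestScore = -1;
  for (const { studentId, vector } of templates) {
    const score = cosineSimilarity(embedding, vector);
    if (score > bestScore) { best = studentId; bestScore = score; }
  }
  return bestScore >= threshold ? best : null;
}
```

The threshold is where the "mistakes can (and will) be made" problem lives: set it too loose and the wrong child gets billed; too strict and a legitimate student gets bounced from the lunch line.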

According to the Financial Times report, 65 schools have already signed up to participate in this program, which has supposedly dropped transaction times at the lunchroom register to less than five seconds per student. I assume that’s an improvement, but it seems fingerprints/cards weren’t all that slow and there are plenty of options for touchless payment if schools need somewhere to spend their cafeteria tech money.

CRB says more than 97% of parents have consented to the collection and use of their children’s biometric info to… um… move kids through the lunch line faster. I guess the sooner you get kids used to having their faces scanned to do mundane things, the less likely they’ll be to complain when demands for info cross over into more private spaces.

The FAQ on the program makes it clear it’s a single-purpose collection governed by a number of laws and data collection policies. Parents can opt out at any time and all data is deleted after opt out or if the student leaves the school. It’s good this is being handled responsibly but, like all facial recognition tech, mistakes can (and will) be made. When these inevitably occur, hopefully the damage will be limited to a missed meal.

The FAQ handles questions specifically about this program. The other flyer published by the North Ayrshire Council explains nothing and implies facial recognition is harmless, accurate, and a positive addition to students’ lives.

We’re introducing Facial Recognition!

This new technology is now available for a contactless meal service!

Following this exciting announcement, the flyer moves on to discussing biometric collections and the tech that makes it all possible. It accomplishes this in seven short “land of contrasts” paragraphs that explain almost nothing and completely ignore the inherent flaws in these systems as well as the collateral damage misidentification can cause.

The section titled “The history of biometrics” contains no history. Instead, it says biometric collections are already omnipresent so why worry about paying for lunch with your face?

Whilst the use of biometric recognition has been steadily growing over the last decade or so, these past couple of years have seen an explosion in development, interest and vendor involvement, particularly in mobile devices where they are commonly used to verify the owner of the device before unlocking or making purchases.

If students want to learn more (or anything) about the history of biometrics, I guess they’ll need to do their own research. Because this is the next (and final) paragraph of the “history of biometrics” section:

We are delighted to offer this fast and secure identification technology to purchase our delicious and nutritious school meals

Time is a flat circle, I guess. The history of biometrics is the present. And the present is the future of student payment options, of which there are several. But these schools have put their money on facial recognition, which will help them raise a generation of children who’ve never known a life where they weren’t expected to use their bodies to pay for stuff.

Source: UK Schools Normalizing Biometric Collection By Using Facial Recognition For Meal Payments | Techdirt

Nintendo Killed Emulation Sites Then Released Garbage N64 Games For The Switch

[…]

You will recall that a couple of years back, Nintendo opened up a new front in its constant IP wars by going after ROM and emulation sites. That caused plenty of sites to simply shut themselves down, but Nintendo also made a point of getting some scalps to hang on its belt, most famously in the form of RomUniverse. That site, which very clearly had infringing material not only on the site but promoted by the site’s ownership, got slapped around in the courts to the tune of a huge judgment against it, one the site owners simply cannot pay.

But all of those are details and don’t answer the real question: why did Nintendo do this? Well, as many expected from the beginning, it did this because the company was planning to release a series of classic consoles, namely the NES mini and SNES mini. But, of course, what about later consoles? Such as the Nintendo 64?

Well, the answer to that is that Nintendo has offered a Nintendo Switch Online service uplift that includes some N64 games that you can play there instead.

After years of “N64 mini” rumors (which have yet to come to fruition), Nintendo announced plans to honor its first fully 3D gaming system late last month in the form of the Nintendo Switch Online Expansion Pack. Pay a bit extra, the company said, and you’d get a select library of N64 classics, emulated by the company that made them, on Switch consoles as part of an active NSO subscription.

One month later, however, Nintendo’s sales proposition grew more sour. That “bit extra” ballooned to $30 more per year, on top of the existing $20/year fee—a 150 percent jump in annual price. Never mind that the price also included an Animal Crossing expansion pack (which retro gaming fans may not want) and Sega Genesis games (which have been mostly released ad nauseam on every gaming system of the past decade). For many interested fans, that price jump was about the N64 collection.

So, a bit of a big price tag and a bunch of extras that are mostly beside the point from the buyer’s perspective. But, hey, at least Nintendo fans will finally get some N64 games to play on their Switch consoles, right?

Well, it turns out that Nintendo’s offering can’t come close to matching the quality of the very emulators and ROMs that Nintendo has worked so hard to disappear. The Ars Technica post linked above goes into excruciating detail, some of which we’ll discuss by way of example, but here are the categories in which Nintendo’s product does worse than an emulator on a PC:

  • Game options, such as visual settings for resolution to fit modern screens
  • Visuals, such as N64’s famous blur settings, and visual changes that expose outdated graphical sprites
  • Controller input lag
  • Controller configuration options
  • Multiplayer lag/stutter

 

If that seems like a lot of problems compared with emulators that have been around for quite a while, well, ding ding ding! We’ll get into some examples briefly below, but I’ll stipulate that none of the issues in the categories above are incredibly bad. But there are so many of them that they all add up to bad!

[…]

Source: Nintendo Killed Emulation Sites Then Released Garbage N64 Games For The Switch | Techdirt

NFI decrypts Tesla’s hidden driving data

[…] The Netherlands Forensic Institute (NFI) said it discovered a wealth of information about Tesla’s Autopilot, along with data around speed, accelerator pedal positions, steering wheel angle and more. The findings will allow the government to “request more targeted data” to help determine the cause of accidents, the investigators said.

The researchers already knew that Tesla vehicles encrypt and store accident related data, but not which data and how much. As such, they reverse-engineered the system and succeeded in “obtaining data from the models S, Y, X and 3,” which they described in a paper presented at an accident analysis conference.

[…]

With knowledge of how to decrypt the storage, the NFI carried out tests with a Tesla Model S so it could compare the logs with real-world data. It found that the vehicle logs were “very accurate,” with deviations less than 1 km/h (about 0.6 MPH).
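The accuracy claim boils down to a simple comparison: line up the decrypted log against independently measured reference speeds and check that no sample deviates by more than about 1 km/h. A minimal sketch of that check (the function name and data are illustrative, not NFI's tooling):

```javascript
// Compare logged vehicle speeds against reference measurements taken
// during a test drive, and report the largest absolute deviation (km/h).
function maxSpeedDeviation(loggedKmh, referenceKmh) {
  if (loggedKmh.length !== referenceKmh.length) {
    throw new Error('sample counts must match');
  }
  let max = 0;
  for (let i = 0; i < loggedKmh.length; i++) {
    max = Math.max(max, Math.abs(loggedKmh[i] - referenceKmh[i]));
  }
  return max;
}
```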

[…]

It used to be possible to extract Autopilot data from Tesla EVs, but it’s now encrypted in recent models, the investigators said. Tesla encrypts data for good reason, they acknowledged, including protecting its own IP from other manufacturers and guarding a driver’s privacy. It also noted that the company does provide specific data to authorities and investigators if requested.

However, the team said that the extra data they extracted would allow for more detailed accident investigations, “especially into the role of driver assistance systems.” It added that it would be ideal to know if other manufacturers stored the same level of detail over long periods of time. “If we would know better which data car manufacturers all store, we can also make more targeted claims through the courts or the Public Prosecution Service,” said NFI investigator Frances Hoogendijk. “And ultimately that serves the interest of finding the truth after an accident.”

Source: The Dutch government claims it can decrypt Tesla’s hidden driving data | Engadget

Protecting your IP this way basically means the data can’t be used for legitimate purposes – such as investigating accidents – and it holds back progress. This whole IP thing has gotten way out of hand, to the detriment of the human race!

Also, this sounds like non-GDPR-compliant data collection.

5 notable Facebook fuckups in the recent revelations

The Facebook Papers are based on leaks from former Facebook staffer Frances Haugen and other inside sources. Haugen has appeared before US Congress, British Parliament, and given prominent television interviews. Among the allegations raised are that Facebook:

  • Knows that its algorithms lead users to extreme content and that it employs too few staff or contractors to curb such content, especially in languages other than English. Content glorifying violence and hate therefore spreads on Facebook – which really should know better by now after The New York Times in 2018 reported that Myanmar’s military used Facebook to spread racist propaganda that led to mass violence against minority groups;
  • Enforces its rules selectively, allowing certain celebrities and websites to get away with behavior that would get others kicked off the platform. Inconsistent enforcement means users don’t get the protection from harmful content Facebook has so often promised, implying that it prioritises finding eyeballs for ads ahead of user safety;
  • Planned a special version of Instagram targeting teenagers, but cancelled it after Haugen revealed the site’s effects on some users – up to three per cent of teenage girls experience depression or anxiety, or self-harm, as a result of using the service;
  • Can’t accurately assess user numbers and may be missing users with multiple accounts. The Social Network™ may therefore have misrepresented its reach to advertisers, or made its advertising look more super-targeted than it really is – or both;
  • Just isn’t very good at spotting the kind of content it says has no place on its platform – like human trafficking – yes, that means selling human beings on Facebook. At one point Apple was so upset by the prevalence of Facebook posts of this sort it threatened to banish Zuckerberg’s software from the App Store.

Outlets including AP News and The Wall Street Journal have more original reporting on the leaks.

Source: Facebook labels recent revelations unfair • The Register

Google deliberately throttled ad load times to promote AMP, locking advertisers into its own advertising marketplace

More detail has emerged from a 173-page complaint filed last week in the lawsuit brought against Google by a number of US states, including allegations that Google deliberately throttled advertisements not served to its AMP (Accelerated Mobile) pages.

The lawsuit – as we explained at the end of last week – was originally filed in December 2020 and concerns alleged anti-competitive practice in digital advertising. The latest document, filed on Friday, makes fresh claims alleging ad-throttling around AMP.

Google introduced AMP (Accelerated Mobile Pages) in 2015, with the stated purpose of speeding up mobile web pages. An AMP page is a second version of a web page using AMP components and restricted JavaScript, and is usually served via Google’s content delivery network. Until 2018, the AMP project, although open source, had as part of its governance a BDFL (Benevolent Dictator for Life), this being Google’s Malte Ubl, the technical lead for AMP.

In 2018, Ubl posted that this changed “from a single Tech lead to a Technical Steering Committee”. The TSC sets its own membership and has a stated goal of “no more than 1/3 of the TSC from one employer”, though it currently has nine members, of whom four are from Google, including operating director Joey Rozier.

According to the Friday court filing, representing the second amended complaint [PDF] from the plaintiffs, “Google ad server employees met with AMP employees to strategize about using AMP to impede header bidding.” Header bidding, as described in our earlier coverage, enabled publishers to offer ad space to multiple ad exchanges, rather than exclusively to Google’s ad exchange. The suit alleges that AMP limited the compatibility with header bidding to just “a few exchanges,” and “routed rival exchange bids through Google’s ad server so that Google could continue to peek at their bids and trade on inside information”.
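To see why publishers valued header bidding, and why limiting it mattered, here's a toy sketch: the impression clears at the best price across several exchanges, rather than exclusively through one. The exchange names and CPM figures are invented for illustration.

```javascript
// Toy header-bidding auction: bids are collected client-side from
// multiple exchanges before the ad server is called, and the
// impression goes to the highest CPM bid.
function runHeaderBidding(bids) {
  return bids.reduce((best, b) => (b.cpm > best.cpm ? b : best));
}

// Illustrative bids (not real exchanges or prices):
const bids = [
  { exchange: 'ExchangeA', cpm: 2.10 },
  { exchange: 'ExchangeB', cpm: 2.75 },
  { exchange: 'GoogleAdX', cpm: 2.40 },
];
```

In this toy model the publisher earns 2.75 instead of whatever a single exclusive exchange would have cleared at, which is the revenue upside the complaint says AMP's restricted compatibility took away.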

The lawsuit also states that Google’s claims of faster performance for AMP pages “were not true for publishers that designed their web pages for speed”.

A more serious claim is that: “Google throttles the load time of non-AMP ads by giving them artificial one-second delays in order to give Google AMP a ‘nice comparative boost’. Throttling non-AMP ads slows down header bidding, which Google then uses to denigrate header bidding for being too slow.”

The document goes on to allege that: “Internally, Google employees grappled with ‘how to [publicly] justify [Google] making something slower’.”

Google promoted AMP in part by ranking non-AMP pages below AMP pages in search results, and featuring a “Search AMP Carousel” specifically for AMP content. This presented what the complaint claims was a “Faustian bargain,” where “(1) publishers who used header bidding would see the traffic to their site drop precipitously from Google suppressing their ranking in search and re-directing traffic to AMP-compatible publishers; or (2) publishers could adopt AMP pages to maintain traffic flow but forgo exchange competition in header bidding, which would make them more money on an impression-by-impression basis.”

The complaint further alleges that “According to Google’s internal documents, [publishers made] 40 per cent less revenue on AMP pages.”

A brief history of AMP

AMP was controversial from its inception. In 2017 developer Jeremy Keith described AMP as deceptive, drawing defensive remarks from Ubl. Keith later joined the AMP advisory committee, but resigned in August saying that “I can’t in good faith continue to advise on the AMP project for the OpenJS Foundation when it has become clear to me that AMP remains a Google product, with only a subset of pieces that could even be considered open source.”

One complaint is that the AMP specification requires a link to Google-hosted JavaScript.

In May 2020 Google stated it would “remove the AMP requirement from Top Stories eligibility”.

This was confirmed in April 2021, when Google posted about an update to its “page experience” whereby “the Top Stories carousel feature on Google Search will be updated to include all news content, as long as it meets the Google News policies. This means that using the AMP format is no longer required.” In addition, “we will no longer show the AMP badge icon to indicate AMP content.” Finally, Google Search signed exchanges, which pre-fetches content to speed page rendering on sites which support the feature, was extended to all web pages where it was previously restricted to AMP pages.

This is evidence that Google is pulling back from its promotion of AMP, though it also said that “Google continues to support AMP”.

As for the complaint, it alleges that Google has an inherent conflict of interest. According to the filing: “Google was able to demand that it represent the buy-side (i.e., advertisers), where it extracted one fee, as well as the sell-side (i.e., publishers), where it extracted a second fee, and it was also able to force transactions to clear in its exchange, where it extracted a third, even larger, fee.”

The company also has more influence than any other on web standards, thanks to the dominant Chrome browser and Chromium browser engine, and on mobile technology, thanks to Android.

That Google would devise a standard from which it benefited is not surprising, but the allegation of deliberately delaying ads on other formats in order to promote it is disturbing and we have asked the company to comment.

Source: Google deliberately throttled ad load times to promote AMP, claims new court document • The Register

Monopolies eh!

UK government hands secret services cloud contract to AWS

The UK’s intelligence services are to store their secret files in the AWS cloud in a deal inked earlier this year, according to reports.

The GCHQ organisation (electrical/radio communications eavesdropping), MI5 (domestic UK intelligence matters), MI6 (external UK intel) and also the Ministry of Defence (MoD) will access their data in the cloud, albeit in UK-located AWS data centres.

The news was first reported in the Financial Times newspaper (paywall), which said GCHQ drove the deal that was signed earlier this year, and the data will be stored in a high-security way. It is claimed by unnamed sources that AWS itself will not have access to the data.

Apparently the three agencies plus the MoD will be able to access information faster and share it more quickly when needed. This is presumably in contrast to each agency storing its own information on its own on-premises computer systems.

[…]

The US’s CIA signed a $600m AWS Cloud contract in 2013. That contract was upgraded in 2020 and involved AWS, Google, IBM, Microsoft and Oracle in a consortium.

Of course, for the US, AWS is a domestic firm. The French government is setting up its own sovereign public cloud called Bleu for sensitive government data. This “Cloud de Confiance” will be based on Microsoft’s Azure platform – and will include Microsoft 365 – but will apparently be “delivered via an independent environment” that has “immunity from all extraterritorial legislation and economic independence” from within an “isolated infrastructure that uses data centres located in France.”

In GCHQ’s reported view, no UK-based public cloud could provide the scale or capabilities needed for the security services’ data storage requirements.

[…]

Source: UK government hands secret services cloud contract to AWS • The Register