John Deere signs right to repair agreement

As farming has become more technology-driven, Deere has increasingly injected software into its products, with all of its tractors and harvesters now including an autopilot feature as standard.

There is also the John Deere Operations Center, which “instantly captures vital operational data to boost transparency and increase productivity for your business.”

Within a matter of years, the company envisages having 1.5 million machines and half a billion acres of land connected to the cloud service, which will “collect and store crop data, including millions of images of weeds that can be targeted by herbicide.”

Deere also estimates that software fees will make up 10 percent of the company’s revenues by the end of the decade, with Bernstein analysts pegging the average gross margin for farming software at 85 percent, compared to 25 percent for equipment sales.

Just like other commercial software vendors, however, Deere exercises close control and restricts what can be done with its products. This led farm labor advocacy groups to file a complaint to the US Federal Trade Commission last year, claiming that Deere unlawfully refused to provide the software and technical data necessary to repair its machinery.

“Deere is the dominant force in the $68 billion US agricultural equipment market, controlling over 50 per cent of the market for large tractors and combines,” said Fairmark Partners, the groups’ attorneys, in a preface to the complaint [PDF].

“For many farmers and ranchers, they effectively have no choice but to purchase their equipment from Deere. Not satisfied with dominating just the market for equipment, Deere has sought to leverage its power in that market to monopolize the market for repairs of that equipment, to the detriment of farmers, ranchers, and independent repair providers.”

[…]

The MoU, which can be read here [PDF], was signed yesterday at the 2023 AFBF Convention in San Juan, Puerto Rico, and seems to be a commitment by Deere to improve farmers’ access and choice when it comes to repairs.

[…]

Duvall said on a podcast about the matter that the MoU is the result of several years’ work. “As you use equipment, we all know at some point in time, there’s going to be problems with it. And we did have problems with having the opportunity to repair our equipment where we wanted to, or even repair it on the farm,” he added.

“It ensures that our farmers can repair their equipment and have access to the diagnostic tools and product guides so that they can find the problems and find solutions for them. And this is the beginning of a process that we think is going to be real healthy for our farmers and for the company because what it does is it sets up an opportunity for our farmers to really work with John Deere on a personal basis.”

[…]

Source: John Deere signs right to repair agreement • The Register

But… still gives John Deere access to their data for free?

This may also have something to do with the security of John Deere machines being so incredibly piss poor, mainly due to really bad update hygiene.

Startup Claims It’s Sending Sulfur Into the Atmosphere to Fight Climate Change

A startup says it has begun releasing sulfur particles into Earth’s atmosphere, in a controversial attempt to combat climate change by deflecting sunlight. Make Sunsets, a company that sells carbon offset “cooling credits” for $10 each, is banking on solar geoengineering to cool down the planet and fill its coffers. The startup claims it has already released two test balloons, each filled with about 10 grams of sulfur particles and intended for the stratosphere, according to the company’s website and first reported on by MIT Technology Review.

The concept of solar geoengineering is simple: Add reflective particles to the upper atmosphere to reduce the amount of sunlight that penetrates from space, thereby cooling Earth. It’s an idea inspired by the atmospheric side effects of major volcanic eruptions, which have led to drastic, temporary climate shifts multiple times throughout history, including the notorious “year without a summer” of 1816.

Yet effective and safe implementation of the idea is much less simple. Scientists and engineers have been studying solar geoengineering as a potential climate change remedy for more than 50 years. But almost nobody has actually enacted real-world experiments because of the associated risks, like rapid changes in our planet’s precipitation patterns, damage to the ozone layer, and significant geopolitical ramifications.

[…]

if and when we get enough sulfur into the atmosphere to meaningfully cool Earth, we’d have to keep adding new particles indefinitely to avoid entering an era of climate change about four to six times worse than what we’re currently experiencing, according to one 2018 study. Sulfur aerosols don’t stick around very long. Their lifespan in the stratosphere is somewhere between a few days and a couple years, depending on particle size and other factors.

[…]

Rogue agents independently deciding to impose geoengineering on the rest of us has been a concern for as long as the thought of intentionally manipulating the atmosphere has been around. The Pentagon even has dedicated research teams working on methods to detect and combat such clandestine attempts. But effectively defending against solar geoengineering is much more difficult than just doing it.

In Iseman’s rudimentary first trials, he says he released two weather balloons full of helium and sulfur aerosols somewhere in Baja California, Mexico. The founder told MIT Technology Review that the balloons rose toward the sky but, beyond that, he doesn’t know what happened to them, as the balloons lacked tracking equipment. Maybe they made it to the stratosphere and released their payload, maybe they didn’t.

[…]

Iseman and Make Sunsets claim that a single gram of sulfur aerosols counteracts the warming effects of one ton of CO2. But there is no clear scientific basis for such an assertion, geoengineering researcher Shuchi Talati told the outlet. And so the $10 “cooling credits” the company is hawking are likely bunk (along with most carbon credit/offset schemes).

Even if the balloons made it to the stratosphere, the small amount of sulfur released wouldn’t be enough to trigger significant environmental effects, David Keith told MIT Technology Review.

[…]

The solution to climate change is almost certainly not a single maverick “disrupting” the composition of Earth’s stratosphere. But that hasn’t stopped Make Sunsets from reportedly raising nearly $750,000 in funds from venture capital firms. And for just ~$29,250,000 more per year, the company claims it can completely offset current warming. It’s not a bet we recommend taking.
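
For what it’s worth, here is a quick back-of-the-envelope check of the company’s own figures, a minimal sketch in Python that simply takes the $10-per-credit price and the (scientifically unsupported) one-gram-offsets-one-ton claim at face value:

```python
# Sanity check of Make Sunsets' own numbers (illustrative only; the
# "1 g of sulfur offsets 1 t of CO2 warming" conversion is the company's claim,
# which the researchers quoted above say has no clear scientific basis).
PRICE_PER_CREDIT_USD = 10        # one "cooling credit"
GRAMS_SULFUR_PER_CREDIT = 1      # claimed deployment per credit

annual_cost_usd = 29_250_000     # figure quoted for offsetting current warming
credits_per_year = annual_cost_usd / PRICE_PER_CREDIT_USD
sulfur_tonnes_per_year = credits_per_year * GRAMS_SULFUR_PER_CREDIT / 1_000_000

print(f"credits per year: {credits_per_year:,.0f}")          # ~2.9 million
print(f"sulfur per year:  {sulfur_tonnes_per_year:.1f} t")   # ~2.9 tonnes
```

By the company’s own math that is roughly three tonnes of sulfur a year, a vanishingly small amount next to the millions of tonnes a single major volcanic eruption can inject, which fits the researchers’ view that the credits are bunk.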

Source: Startup Claims It’s Sending Sulfur Into the Atmosphere to Fight Climate Change

FSF Warns: Stay Away From iPhones, Amazon, Netflix, and Music Streaming Services

For the last thirteen years the Free Software Foundation has published its Ethical Tech Giving Guide. But what’s interesting is that this year’s guide also tags companies and products with negative recommendations to “stay away from.”

Stay away from: iPhones
It’s not just Siri that’s creepy: all Apple devices contain software that’s hostile to users. Although they claim to be concerned about user privacy, they don’t hesitate to put their users under surveillance.

Apple prevents you from installing third-party free software on your own phone, and they use this control to censor apps that compete with or subvert Apple’s profits.

Apple has a history of exploiting their absolute control over their users to silence political activists and help governments spy on millions of users.

Stay away from: M1 MacBook and MacBook Pro
macOS is proprietary software that restricts its users’ freedoms.

In November 2020, macOS was caught alerting Apple each time a user opens an app. Even though Apple is making changes to the service, it just goes to show how badly they behave until there is an outcry.

Comes crawling with spyware that rats you out to advertisers.

Stay away from: Amazon
Amazon is one of the most notorious DRM offenders. They use this Orwellian control over their devices and services to spy on users and keep them trapped in their walled garden.

Be aware that Amazon isn’t the only peddler of ebook DRM. Disturbingly, it’s enthusiastically supported by most of the big publishing houses.

Read more about the dangers of DRM through our Defective by Design campaign.

Stay away from: Spotify, Apple Music, and all other major streaming services
In addition to streaming music encumbered by DRM, people who want to use Spotify are required to install additional proprietary software. Even Spotify’s client for GNU/Linux relies on proprietary software.

Apple Music is no better, and places heavy restrictions on the music streamed through the platform.

Stay away from: Netflix
Netflix is continuing its disturbing trend of making onerous DRM the norm for streaming media. That’s why they were a target for last year’s International Day Against DRM (IDAD).

They’re also leveraging their place in the Motion Picture Association of America (MPAA) to advocate for tighter restrictions on users, and drove the effort to embed DRM into the fabric of the Web.

“In your gift giving this year, put freedom first,” their guide begins.

And for a freedom-respecting last-minute gift idea, they suggest giving the gift of an FSF membership (which comes with a code and a printable page “so that you can present your gift as a physical object, if you like”). The membership is valid for one year and includes the many benefits that come with an FSF associate membership: a USB member card, email forwarding, access to our Jitsi Meet videoconferencing server and member forum, discounts in the FSF shop and on ThinkPenguin hardware, and more.

If you are in the United States, your gift is also fully tax-deductible.

Source: FSF Warns: Stay Away From iPhones, Amazon, Netflix, and Music Streaming Services – Slashdot

Mickey’s Copyright Adventure: Early Disney Creation Will Soon Be Public Property – finally. What lawsuits lie in wait?

The version of the iconic character from “Steamboat Willie” will enter the public domain in 2024. But those trying to take advantage could end up in a legal mousetrap. From a report: There is nothing soft and cuddly about the way Disney protects the characters it brings to life. This is a company that once forced a Florida day care center to remove an unauthorized Minnie Mouse mural. In 2006, Disney told a stonemason that carving Winnie the Pooh into a child’s gravestone would violate its copyright. The company pushed so hard for an extension of copyright protections in 1998 that the result was derisively nicknamed the Mickey Mouse Protection Act. For the first time, however, one of Disney’s marquee characters — Mickey himself — is set to enter the public domain.

“Steamboat Willie,” the 1928 short film that introduced Mickey to the world, will lose copyright protection in the United States and a few other countries at the end of next year, prompting fans, copyright experts and potential Mickey grabbers to wonder: How is the notoriously litigious Disney going to respond? “I’m seeing in Reddit forums and on Twitter where people — creative types — are getting excited about the possibilities, that somehow it’s going to be open season on Mickey,” said Aaron J. Moss, a partner at Greenberg Glusker in Los Angeles who specializes in copyright and trademark law. “But that is a misunderstanding of what is happening with the copyright.” The matter is more complicated than it appears, and those who try to capitalize on the expiring “Steamboat Willie” copyright could easily end up in a legal mousetrap. “The question is where Disney tries to draw the line on enforcement,” Mr. Moss said, “and if courts get involved to draw that line judicially.”

Only one copyright is expiring. It covers the original version of Mickey Mouse as seen in “Steamboat Willie,” an eight-minute short with little plot. This nonspeaking Mickey has a rat-like nose, rudimentary eyes (no pupils) and a long tail. He can be naughty. In one “Steamboat Willie” scene, he torments a cat. In another, he uses a terrified goose as a trombone. Later versions of the character remain protected by copyrights, including the sweeter, rounder Mickey with red shorts and white gloves most familiar to audiences today. They will enter the public domain at different points over the coming decades. “Disney has regularly modernized the character, not necessarily as a program of copyright management, at least initially, but to keep up with the times,” said Jane C. Ginsburg, an authority on intellectual property law who teaches at Columbia University.
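
For reference, the timing follows from the current US term for works published before 1978, a flat 95 years from publication running through the end of the 95th calendar year, assuming (as with Disney) that the renewal formalities were met. A minimal sketch of that arithmetic:

```python
# US copyright term for works published 1923-1977 (with notice/renewal):
# 95 years from publication; the work enters the public domain on January 1
# of the following year.
def us_public_domain_year(publication_year: int, term_years: int = 95) -> int:
    return publication_year + term_years + 1

print(us_public_domain_year(1928))  # Steamboat Willie -> 2024
print(us_public_domain_year(1953))  # e.g. a later Mickey design (hypothetical) -> 2049
```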

Source: Mickey’s Copyright Adventure: Early Disney Creation Will Soon Be Public Property – Slashdot

How it’s remotely possible that a company is capitalising on a thought someone had around 100 years ago is beyond me.

Meta agrees to $725 million settlement in Cambridge Analytica class-action lawsuit

It’s been four years since Facebook became embroiled in its biggest scandal to date: Cambridge Analytica. In addition to paying $5 billion to the Federal Trade Commission in a settlement, the social network has just agreed to pay $725 million to settle a long-running class-action lawsuit, making it the biggest settlement ever in a privacy case.

To recap, a whistleblower revealed in 2018 that now-defunct British political consulting firm Cambridge Analytica harvested the personal data of almost 90 million users without their consent for targeted political ads during the 2016 US presidential campaign and the UK’s Brexit referendum.

The controversy led to Mark Zuckerberg testifying before Congress, a $5 billion fine levied on the company by the FTC in July 2019, and a $100 million settlement with the US Securities and Exchange Commission. There was also a class-action lawsuit filed in 2018 on behalf of Facebook users who alleged the company violated consumer privacy laws by sharing private data with other firms.

Facebook parent Meta settled the class action in August, thereby ensuring CEO Mark Zuckerberg, chief operating officer Javier Olivan, and former COO Sheryl Sandberg avoided hours of questioning from lawyers while under oath.

[…]

This doesn’t mark the end of Meta’s dealings with the Cambridge Analytica fallout. Zuckerberg is facing a lawsuit from Washington DC’s attorney general Karl A. Racine over allegations that the Meta boss was personally involved in failures that led to the incident and his “policies enabled a multi-year effort to mislead users about the extent of Facebook’s wrongful conduct.”

Source: Meta agrees to $725 million settlement in Cambridge Analytica class-action lawsuit | TechSpot

The Copyright Industry Is About To Discover That There Are Hundreds Of Thousands Of Songs Generated By AI Already Available, Already Popular

You may have noticed the world getting excited about the capabilities of ChatGPT, a text-based AI chat bot. Similarly, some are getting quite worked up over generative AI systems that can turn text prompts into images, including those mimicking the style of particular artists. But less remarked upon is the use of AI in the world of music. Music Business Worldwide has written two detailed news stories on the topic. The first comes from China:

Tencent Music Entertainment (TME) says that it has created and released over 1,000 tracks containing vocals created by AI tech that mimics the human voice.

And get this: one of these tracks has already surpassed 100 million streams.

Some of these songs use synthetic voices based on human singers, both dead and alive:

TME also confirmed today (November 15) that – in addition to “paying tribute” to the vocals of dead artists via the Lingyin Engine – it has also created “an AI singer lineup with the voices of trending [i.e. currently active] stars such as Yang Chaoyue, among others”.

The copyright industry will doubtless have something to say about that. It is also unlikely to be delighted by the second Music Business Worldwide story about AI-generated music, this time in the Middle East and North Africa (MENA) market:

MENA-focused Spotify rival, Anghami, is now taking the concept to a whole other level – claiming that it will soon become the first platform to host over 200,000 songs generated by AI.

Anghami has partnered with a generative music platform called Mubert, which says it allows users to create “unique soundtracks” for various uses such as social media, presentations or films using one million samples from over 4,000 musicians.

According to Mohammed Ogaily, VP Product at Anghami, the service has already “generated over 170,000 songs, based on three sets of lyrics, three talents, and 2,000 tracks generated by AI”.

It’s striking that the undoubtedly interesting but theoretical possibilities of ChatGPT and generative AI art are dominating the headlines, while we hear relatively little about these AI-based music services that are already up and running, and hugely popular with listeners. It’s probably a result of the generally parochial nature of mainstream Western media, which often ignores the important developments happening elsewhere.

Source: The Copyright Industry Is About To Discover That There Are Hundreds Of Thousands Of Songs Generated By AI Already Available, Already Popular | Techdirt

AI-Created Comic Has Copyright Protection Revoked by US

The United States Copyright Office (USCO) reversed an earlier decision to grant a copyright to a comic book that was created using “A.I. art,” and announced that the copyright protection on the comic book will be revoked, stating that copyrighted works must be created by humans to gain official copyright protection.

In September, Kris Kashtanova announced that they had received a U.S. copyright on Zarya of the Dawn, a comic book inspired by their late grandmother that they created with the text-to-image engine Midjourney. Kashtanova referred to themself as a “prompt engineer” and explained at the time that they went to get the copyright so that they could “make a case that we do own copyright when we make something using AI.”

[…]

Source: AI-Created Comic Has Been Deemed Ineligible for Copyright Protection

I guess there is no big corporate interest in lobbying for AI-created content – yet – and so the copyright office has no idea what to do without its corporate cash-carrying masters telling it what to do.

Epic Forced To Pay $520 Million Fine over Fortnite Privacy and Dark Patterns

Fortnite-maker Epic Games has agreed to pay a massive $520 million fine in settlements with the Federal Trade Commission for allegedly illegally gathering data from children and deploying dark patterns techniques to manipulate users into making unwanted in-game purchases. The fines mark a major regulatory win for the Biden administration’s progressive-minded FTC, which, up until now, had largely failed to deliver on its promise of more robust enforcement against U.S. tech companies.

The first $275 million fine will settle allegations Epic collected personal information from children under the age of 13 without their parents’ consent when they played the hugely popular battle royale game. The FTC claims that unjustified data collection violates the Children’s Online Privacy Protection Act. Internal Epic surveys and the licensing of Fortnite-branded toys, the FTC alleges, show Epic clearly knew at least some of its player base was underage. Worse still, the agency claims Epic forced parents to wade through cumbersome barriers when they requested to have their children’s data deleted.

[…]

The game-maker additionally agreed to pay $245 million to refund customers who the FTC says fell victim to manipulative, unfair billing practices that fall under the category of “dark patterns.” Fortnite allegedly deployed a “counterintuitive, inconsistent, and confusing button configuration” that led players to incur unwanted charges with a single press of a button. In some cases, the FTC claims, that single button press meant users were charged while sitting in a loading screen or while trying to wake the game from sleep mode. Users, the complaint alleges, collectively lost hundreds of millions of dollars to those shady practices. Epic allegedly “ignored more than one million user complaints,” suggesting a high number of users were being wrongly charged.

[…]

And though the FTC’s latest fine is a far cry from the $5 billion penalty the agency issued against Facebook in 2019 and represents just a portion of the billions Fortnite reportedly rakes in each year, supporters said it nonetheless represents more than a mere slap on the wrist.

[…]

Source: Epic Forced To Pay Record-Breaking $520 Million Fine

China’s Setting the Standard for Deepfake Regulation

[…]

On January 10, according to The South China Morning Post, China’s Cyberspace Administration will implement new rules that are intended to protect people from having their voice or image digitally impersonated without their consent. The regulators refer to platforms and services using the technology to edit a person’s voice or image as “deep synthesis providers.”

Those deep synthesis technologies could include the use of deep learning algorithms and augmented reality to generate text, audio, images or video. We’ve already seen numerous instances over the years of these technologies used to impersonate high profile individuals, ranging from celebrities and tech executives to political figures.

Under the new guidelines, companies and technologists who use the technology must first contact the individuals involved and obtain their consent before editing their voice or image. The rules, officially called the Administrative Provisions on Deep Synthesis for Internet Information Services, come in response to governmental concerns that advances in AI tech could be used by bad actors to run scams or defame people by impersonating their identity. In presenting the guidelines, the regulators also acknowledge areas where these technologies could prove useful. Rather than impose a wholesale ban, the regulator says it would actually promote the tech’s legal use and “provide powerful legal protection to ensure and facilitate” its development.

But, like many of China’s proposed tech policies, political considerations are inseparable. According to the South China Morning Post, news stories reposted using the technology must come from a government-approved list of news outlets. Similarly, the rules require that all so-called deep synthesis providers adhere to local laws and maintain “correct political direction and correct public opinion orientation.” Correct here, of course, is determined unilaterally by the state.

Though certain U.S. states like New Jersey and Illinois have introduced local privacy legislation that addresses deepfakes, the lack of any meaningful federal privacy laws limits regulators’ abilities to address the tech on a national level. In the private sector, major U.S. platforms like Facebook and Twitter have created new systems meant to detect and flag deepfakes, though they are constantly trying to stay one step ahead of bad actors continually looking for ways to evade those filters.

If China’s new rules are successful, it could lay down a policy framework other nations could build upon and adapt. It wouldn’t be the first time China’s led the pack on strict tech reform. Last year, China introduced sweeping new data privacy laws that radically limited the ways private companies could collect an individual’s personal identity. Those rules were built off of Europe’s General Data Protection Regulation.

[…]

That all sounds great, but China’s privacy laws have one glaring loophole tucked within them. Though the law protects people from private companies feeding off their data, it does almost nothing to prevent those same harms being carried out by the government. Similarly, with deepfakes, it’s unclear how the newly proposed regulations would, for instance, prohibit a state-run agency from doctoring or manipulating certain text or audio to influence the narrative around controversial or sensitive political events.

Source: China’s Setting the Standard for Deepfake Regulation

China is also the one setting the bar for anti-monopolistic practices, while the EU and US have been caught with their fingers in the jam jar and their pants down.

Google must delete search results about you if they’re fake, EU court rules

People in Europe can get Google to delete search results about them if they prove the information is “manifestly inaccurate,” the EU’s top court ruled Thursday.

The case kicked off when two investment managers asked Google to dereference results of a search made on the basis of their names, which provided links to certain articles criticising that group’s investment model. They say those articles contain inaccurate claims.

Google refused to comply, arguing that it was unaware whether the information contained in the articles was accurate or not.

But in a ruling Thursday, the Court of Justice of the European Union opened the door to the investment managers being able to successfully trigger the so-called “right to be forgotten” under the EU’s General Data Protection Regulation.

“The right to freedom of expression and information cannot be taken into account where, at the very least, a part – which is not of minor importance – of the information found in the referenced content proves to be inaccurate,” the court said in a press release accompanying the ruling.

People who want to scrub inaccurate results from search engines have to provide sufficient proof that what is said about them is false. But it doesn’t have to come from a court case against a publisher, for instance. They have “to provide only evidence that can reasonably be required of [them] to try to find,” the court said.

[…]

Source: Google must delete search results about you if they’re fake, EU court rules – POLITICO

Telegram is auctioning phone numbers to let users sign up to the service without any SIM

After putting unique usernames up for auction on the TON blockchain, Telegram is now putting anonymous numbers up for bidding. These numbers could be used to sign up for Telegram without needing any SIM card.

Just like the username auction, you can buy these virtual numbers on Fragment, which is a site specially created for Telegram-related auctions. To buy a number, you will have to link your TON wallet (Tonkeeper) to the website.

You can buy a random number for as low as 9 toncoins, which is equivalent to roughly $16.50 at the time of writing. Some of the premium virtual numbers — such as +888-8-888 — are selling for 31,500 toncoins (~$58,200).
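
Doing the quick conversion on those quoted figures (the rate below is implied by the article’s own numbers at the time of writing, not a live price):

```python
# Implied toncoin rate from the article's own numbers (not a live quote).
usd_for_nine_ton = 16.50
rate = usd_for_nine_ton / 9                   # ~$1.83 per toncoin
premium_number_price_ton = 31_500
print(f"~${premium_number_price_ton * rate:,.0f}")
# ~$57,750, in the same ballpark as the quoted ~$58,200 (the gap is rounding in the $16.50 figure)
```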

Notably, you can only use this number to sign up for Telegram. You can’t use it to receive SMS or calls or use it to register for another service.

For Telegram, this is another way of asking its most loyal supporters to support the app by helping it make some money. The company launched its premium subscription plan earlier this year. On Tuesday, the chat app’s founder Pavel Durov said that Telegram has more than 1 million paid users just a few months after the launch of its premium features. While Telegram offers features like cross-device sync and large groups, it’s important to remember that chats are not protected by end-to-end encryption by default.

As for folks who want anonymization, Telegram already lets you hide your phone number. Alternatively, there are tons of virtual phone number services out there — including Google Voice, Hushed, and India-based Doosra — that allow you to receive calls and SMS as well.

Source: Telegram is auctioning phone numbers to let users sign up to the service without any SIM

Grad Students Analyze, Hack, and Remove Under-Desk Surveillance Devices Designed to Track Them – at a privacy institute!

[…]

graduate students at Northeastern University were able to organize and beat back an attempt at introducing invasive surveillance devices that were quietly placed under desks at their school.

Early in October, Senior Vice Provost David Luzzi installed motion sensors under all the desks at the school’s Interdisciplinary Science & Engineering Complex (ISEC), a facility used by graduate students and home to the “Cybersecurity and Privacy Institute” which studies surveillance. These sensors were installed at night—without student knowledge or consent—and when pressed for an explanation, students were told this was part of a study on “desk usage,” according to a blog post by Max von Hippel, a Privacy Institute PhD candidate who wrote about the situation for the Tech Workers Coalition’s newsletter.

[…]

In response, students began to raise concerns about the sensors, and an email was sent out by Luzzi attempting to address issues raised by students.

[…]

“The results will be used to develop best practices for assigning desks and seating within ISEC (and EXP in due course).”

To that end, Luzzi wrote, the university had deployed “a Spaceti occupancy monitoring system” that would use heat sensors at groin level to “aggregate data by subzones to generate when a desk is occupied or not.” Luzzi added that the data would be anonymized, aggregated to look at “themes” and not individual time at assigned desks, not be used in evaluations, and not shared with any supervisors of the students. Following that email, an impromptu listening session was held in the ISEC.

At this first listening session, Luzzi asked that grad student attendees “trust the university since you trust them to give you a degree.” Luzzi also maintained that “we are not doing any science here” as another defense of the decision to not seek IRB approval.

“He just showed up. We’re all working, we have paper deadlines and all sorts of work to do. So he didn’t tell us he was coming, showed up demanding an audience, and a bunch of students spoke with him,”

[…]

After that, the students at the Privacy Institute, which specializes in studying surveillance and reversing its harm, started removing the sensors, hacking into them, and working on an open source guide so other students could do the same. Luzzi had claimed the devices were secure and the data encrypted, but Privacy Institute students learned they were relatively insecure and unencrypted.

[…]

After hacking the devices, students wrote an open letter to Luzzi and university president Joseph E. Aoun asking for the sensors to be removed because they were intimidating, part of a poorly conceived study, and deployed without IRB approval even though human subjects were at the center of the so-called study.

“Resident in ISEC is the Cybersecurity and Privacy Institute, one of the world’s leading groups studying privacy and tracking, with a particular focus on IoT devices,” the letter reads. “To deploy an under-desk tracking system to the very researchers who regularly expose the perils of these technologies is, at best, an extremely poor look for a university that routinely touts these researchers’ accomplishments.

[…]

Another listening session followed, this time for professors only, where Luzzi claimed the devices were not subject to IRB approval because “they don’t sense humans in particular – they sense any heat source.” More sensors were removed afterwards and put into a “public art piece” in the building lobby spelling out NO!

[…]

Afterwards, von Hippel took to Twitter and shared what became a semi-viral thread documenting the entire timeline of events, from the secret installation of the sensors to the listening session that day. Hours later, the sensors were removed.

[…]

This was a particularly instructive episode because it shows that surveillance need not be permanent—that it can be rooted out by the people affected by it, together.

[…]

“The most powerful tool at the disposal of graduate students is the ability to strike. Fundamentally, the university runs on graduate students.

[…]

“The computer science department was able to organize quickly because almost everybody is a union member, has signed a card, and are all networked together via the union. As soon as this happened, we communicated over union channels.

[…]

This sort of rapid response is key, especially as more and more systems adopt sensors for increasingly spurious or concerning reasons. Sensors have been rolled out at other universities like Carnegie Mellon University, as well as public school systems. They’ve seen use in more militarized and carceral settings such as the US-Mexico border or within America’s prison system.

These rollouts are part of what Cory Doctorow calls the “shitty technology adoption curve” whereby horrible, unethical and immoral technologies are normalized and rationalized by being deployed on vulnerable populations for constantly shifting reasons. You start with people whose concerns can be ignored—migrants, prisoners, homeless populations—then scale it upwards—children in school, contractors, un-unionized workers. By the time it gets to people whose concerns and objections would be the loudest and most integral to its rejection, the technology has already been widely deployed.

[…]

Source: ‘NO’: Grad Students Analyze, Hack, and Remove Under-Desk Surveillance Devices Designed to Track Them

As US, UK Embrace ‘Age Verify Everyone!’ French Data Protection Agency Says Age Verification Is Unreliable And Violates Privacy Rights

[…]

We’ve already spent many, many words explaining how age verification technology is inherently dangerous and actually puts children at greater risk. Not to mention it’s a privacy nightmare that normalizes the idea of mass surveillance, especially for children.

But, why take our word for it?

The French data protection agency, CNIL, has declared that no age verification technology in existence can be deemed safe and not dangerous to privacy rights.

Now, there are many things that I disagree with CNIL about, especially its views that the censorial “right to be forgotten in the EU” should be applied globally. But one thing we likely agree on is that CNIL does not fuck around when it comes to data protection stuff. CNIL is generally seen as the most aggressive and most thorough in its data protection/data privacy work. Being on the wrong side of CNIL is a dangerous place for any company to be.

So I’d take it seriously when CNIL effectively notes that all age verification is a privacy nightmare, especially for children:

The CNIL has analysed several existing solutions for online age verification, checking whether they have the following properties: sufficiently reliable verification, complete coverage of the population and respect for the protection of individuals’ data and privacy and their security.

The CNIL finds that there is currently no solution that satisfactorily meets these three requirements.

Basically, CNIL found that all existing age verification techniques are unreliable, easily bypassed, and are horrible regarding privacy.

Despite this, CNIL seems oddly optimistic that just by nerding harder, perhaps future solutions will magically work. However, it does go through the weaknesses and problems of the various offerings being pushed today as solutions. For example, you may recall that when I called out the dangers of the age verification in California’s Age Appropriate Design Code, a trade group representing age verification companies reached out to me to let me know there was nothing to worry about, because they’d just scan everyone’s faces to visit websites. CNIL points out some, um, issues with this:

The use of such systems, because of their intrusive aspect (access to the camera on the user’s device during an initial enrolment with a third party, or a one-off verification by the same third party, which may be the source of blackmail via the webcam when accessing a pornographic site is requested), as well as because of the margin of error inherent in any statistical evaluation, should imperatively be conditional upon compliance with operating, reliability and performance standards. Such requirements should be independently verified.

This type of method must also be implemented by a trusted third party respecting precise specifications, particularly concerning access to pornographic sites. Thus, an age estimate performed locally on the user’s terminal should be preferred in order to minimise the risk of data leakage. In the absence of such a framework, this method should not be deployed.

Every other verification technique seems to similarly raise questions about effectiveness and about how protective (or, well, how not protective) it is of privacy rights.
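
CNIL’s preferred direction, estimation performed locally so that only a minimal yes/no assertion ever leaves the device, would look roughly like the sketch below. This is purely conceptual; the names and flow are hypothetical and not any vendor’s actual protocol.

```python
# Conceptual sketch of the "local estimation" model CNIL describes: the image and
# the estimated age stay on the device, and only an over/under-18 assertion is shared.
# Everything here is hypothetical; it is not a real vendor API.
from dataclasses import dataclass

@dataclass
class AgeAttestation:
    over_18: bool  # the only fact a website would receive
    # a real scheme would also carry a signature from an independent trusted party

def estimate_age_on_device(camera_frame: bytes) -> int:
    """Stand-in for an on-device ML age estimator; the frame is never uploaded."""
    return 25  # dummy value for illustration

def build_attestation(camera_frame: bytes) -> AgeAttestation:
    age = estimate_age_on_device(camera_frame)
    return AgeAttestation(over_18=age >= 18)  # raw age and image are discarded locally

print(build_attestation(b"\x00" * 10))  # AgeAttestation(over_18=True)
```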

So… why isn’t this raising alarm bells among the various legislatures and children’s advocates (many of whom also claim to be privacy advocates) who are pushing for these laws?

Source: As US, UK Embrace ‘Age Verify Everyone!’ French Data Protection Agency Says Age Verification Is Unreliable And Violates Privacy Rights | Techdirt

Players are boycotting Nintendo and Panda events in the wake of Smash Bros tournaments being instacanceled by Nintendo

In the wake of Nintendo being Nintendo and unceremoniously canceling the Smash World Tour, one of the year’s biggest esports tournaments dedicated to all things Super Smash Bros., copious folks in the game’s community have come out in protest. Casual fans, pro players, long-time commentators, and even other tournament organizers, from AITX eSports to Beyond the Summit, have all publicly denounced not just Nintendo for its asinine decision but also Panda Global for allegedly causing the Smash World Tour to get shut down. Now, it appears many of those people are boycotting all of Nintendo’s officially licensed tournaments as well.

[…]

Super Smash Bros. fans aren’t happy about what’s going on, with many posting their frustrations on Twitter. Some pointed fingers at Panda Global CEO and co-founder Dr. Alan Bunney for allegedly trying to recruit tournaments to the Panda Cup by threatening to get Nintendo involved to shut the Smash World Tour down and reportedly attempting to create a monopoly by requesting exclusive streaming rights to the Panda Cup. Others fear this may hurt their careers and livelihoods. The main consensus is to never watch, support, or attend a Panda Global event ever again. A lot of people seem to feel this way.

[…]

The future of Super Smash Bros.’s competitive fighting game scene is looking quite precarious, with Video Game Boot Camp admitting in the statement that it’s “currently navigating budget cuts, internal communications with our team and partners, commitments/contracts, as well as sponsorship negotiations that will inevitably be affected by all of this.” It’s possible that smaller tournaments will continue without Nintendo’s blessing, but, as has been done time and again, it’s likely only a matter of time until Nintendo comes a-knocking.

[…]

Source: Smash Bros. Fans Are Totally Done With Nintendo And Tournaments

The article says that Smash Bros tournaments were cancelled due to Nintendo not sponsoring them, but the tournaments were cancelled due to Nintendo throwing cease and desist letters at the organisers. Also see: Nintendo Shuts Down Smash World Tour – worlds largest e-sports tournament – out of the blue

Telegram shares users’ data in copyright violation lawsuit to Indian court

Telegram has disclosed names of administrators, their phone numbers and IP addresses of channels accused of copyright infringement in compliance with a court order in India, in a remarkable illustration of the data the instant messaging platform stores on its users and can be made to disclose by authorities.

The app operator was forced by a Delhi High Court order to share the data after a teacher sued the firm for not doing enough to prevent unauthorised distribution of her course material on the platform. Neetu Singh, the plaintiff teacher, said a number of Telegram channels were re-selling her study materials at discounted prices without permission.

An Indian court earlier had ordered Telegram to adhere to the Indian law and disclose details about those operating such channels.

Telegram unsuccessfully argued that disclosing user information would violate the privacy policy and the laws of Singapore, where it has located its physical servers for storing users’ data. In response, the Indian court said the copyright owners couldn’t be left “completely remediless against the actual infringers” because Telegram has chosen to locate its servers outside the country.

In an order last week, Justice Prathiba Singh said Telegram had complied with the earlier order and shared the data.

“Let copy of the said data be supplied to Id. Counsel for plaintiffs with the clear direction that neither the plaintiffs nor their counsel shall disclose the said data to any third party, except for the purposes of the present proceedings. To this end, disclosure to the governmental authorities/police is permissible,” said the court (PDF) and first reported by LiveLaw.

[…]

Source: Telegram shares users’ data in copyright violation lawsuit | TechCrunch

Eufy Cameras Have Been Uploading Unencrypted Face Footage to Cloud

Eufy, the company behind a series of affordable security cameras I’ve previously suggested over the expensive stuff, is currently in a bit of hot water for its security practices. The company, owned by Anker, purports its products to be one of the few security devices that allow for locally-stored media and don’t need a cloud account to work efficiently. But over the turkey-eating holiday, a noted security researcher across the pond discovered a security hole in Eufy’s mobile app that threatens that whole premise.

Paul Moore relayed the issue in a tweeted screengrab. Moore had purchased the Eufy Doorbell Dual Camera for its promise of a local storage option, only to discover that the doorbell’s cameras had been storing thumbnails of faces on the cloud, along with identifiable user information, despite Moore not even having a Eufy Cloud Storage account.

After Moore tweeted the findings, another user found that the data uploaded to Eufy wasn’t even encrypted. Any uploaded clips could be easily played back on any desktop media player, which Moore later demonstrated. What’s more: thumbnails and clips were linked to their partner cameras, offering additional identifiable information to any digital snoopers sniffing around.

Android Central was able to recreate the issue on its own with a EufyCam 3. It then reached out to Eufy, which explained to the site why this issue was cropping up. If you choose to have a motion notification pushed out with an attached thumbnail, Eufy temporarily uploads that file to its AWS servers to send it out.
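
As an illustration of what “unencrypted” means in practice here, a short hedged sketch: fetching a thumbnail URL (placeholder address below, not Eufy’s real endpoint) and inspecting its first bytes would show ordinary JPEG or MP4 data rather than ciphertext.

```python
# Rough illustration of checking whether a camera thumbnail URL serves plain,
# unencrypted media. The URL is a placeholder, not Eufy's actual endpoint.
import urllib.request

THUMBNAIL_URL = "https://example-bucket.s3.amazonaws.com/some-thumbnail.jpg"  # placeholder

with urllib.request.urlopen(THUMBNAIL_URL) as resp:
    head = resp.read(16)

# An encrypted blob would look like random bytes; plaintext media has known signatures.
if head.startswith(b"\xff\xd8\xff"):
    print("plain JPEG: viewable in any image viewer, no decryption needed")
elif head[4:8] == b"ftyp":
    print("plain MP4/ISO media: playable in any desktop media player")
else:
    print("not an obvious plaintext media file")
```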

[…]

Unfortunately, this isn’t the first time Eufy has had an issue regarding security on its cameras. Last year, the company faced similar reports of “unwarranted access” to random camera feeds, though the company quickly fixed the issue once it was discovered. Eufy is no stranger to patching things up.

Source: Eufy Cameras Have Been Uploading Unencrypted Footage to Cloud

Why first upload these images to AWS instead of directly mailing them?!

Nintendo Shuts Down Smash World Tour – worlds largest e-sports tournament – out of the blue

The organisers of the Smash World Tour have today announced that they are being shut down after Nintendo, “without any warning”, told them they could “no longer operate”.

The Tour, which is run by a third party (since Nintendo has been so traditionally bad at this), had grown over the years to become one of the biggest in the esports and fighting game scene. As the SWT team say:

In 2022 alone, we connected over 6,400 live events worldwide, with over 325,000 in-person entrants, making the Smash World Tour (SWT, or the Tour) the largest esports tour in history, for any game title. The Championships would also have had the largest prize pool in Smash history at over $250,000. The 2023 Smash World Tour planned to have a prize pool of over $350,000.

That’s all toast, though, because organisers now say “Without any warning, we received notice the night before Thanksgiving from Nintendo that we could no longer operate”. While Nintendo has yet to comment—we’ve reached out to the company (UPDATE: see comment at bottom of post)—Nintendo recently teamed up with Panda to run a series of competing, officially-licensed Smash events.

While this will be a disappointment to SWT’s organisers, fans and players, it has also placed the team in a huge financial hole, since so many bookings and plans for the events had already been made. As they say in the cancellation announcement:

We don’t know where everything will land quite yet with contracts, sponsor obligations, etc — in short, we will be losing hundreds of thousands of dollars due to Nintendo’s actions. That being said, we are taking steps to remedy many issues that have arisen from canceling the upcoming Smash World Tour Championships — Especially for the players. Please keep an eye out in the coming days for help with travel arrangements. Given the timeline that we were forced into, we had to publish this statement before we could iron out all of the details. All attendees will be issued full refunds.

The move blindsided the SWT team who had believed, after years of friction, they were starting to make some progress with Nintendo:

In November 2021, after the Panda Cup was first announced, Nintendo contacted us to jump on a call with a few folks on their team, including a representative from their legal team. We truly thought we might be getting shut down given the fact that they now had a licensed competing circuit and partner in Panda.

Once we joined the call, we were very surprised to hear just the opposite.

Nintendo reached out to us to let us know that they had been watching us build over the years, and wanted to see if we were interested in working with them and pursuing a license as well. They made it clear that Panda’s partnership was not exclusive, and they said it had “not gone unnoticed” that we had not infringed on their IP regarding game modifications and had represented Nintendo’s values well. They made it clear that game modifications were their primary concern in regards to “coming down on events”, which also made sense to us given their enforcement over the past few years in that regard.

That lengthy conversation changed our perspective on Nintendo at a macro level; it was incredibly refreshing to talk to multiple senior team members and clear the air on a lot of miscommunications and misgivings in the years prior. We explained why so many in the community were hesitant to reach out to Nintendo to work together, and we truly believed Nintendo was taking a hard look at their relationship with the community, and ways to get involved in a positive manner.

Guess not! In addition to Nintendo now stipulating that tournaments could only run with an official license—something SWT had not been successful in applying for—the team also allege that Panda went around undermining them to the organisers of individual events (the World Tour would have been an umbrella linking these together), and that while Nintendo continued saying nice things to their faces, Panda had told these grassroots organisers that the Smash World Tour was definitely getting shut down, which made them reluctant to come onboard.

You can read the full announcement here, which goes into a lot more detail, and closes with an appeal “that Nintendo reconsiders how it is currently proceeding with their relationship with the Smash community, as well as its partners”.

UPDATE 12:16am ET, November 30: A Nintendo spokesperson tells Kotaku:

Unfortunately after continuous conversations with Smash World Tour, and after giving the same deep consideration we apply to any potential partner, we were unable to come to an agreement with SWT for a full circuit in 2023. Nintendo did not request any changes to or cancellation of remaining events in 2022, including the 2022 Championship event, considering the negative impact on the players who were already planning to participate.

UPDATE 2 1:51am ET, November 30: SWT’s organizers have disputed Nintendo’s statement, issuing a follow-up of their own which reads:

We did not expect to have to address this, but Nintendo’s response via Kotaku has been brought to our attention:

“Unfortunately after continuous conversations with Smash World Tour, and after giving the same deep consideration we apply to any potential partner, we were unable to come to an agreement with SWT for a full circuit in 2023. Nintendo did not request any changes to or cancellation of remaining events in 2022, including the 2022 Championship event, considering the negative impact on the players who were already planning to participate.”

We are unsure why they are taking this angle, especially in light of the greater statement and all that it contains.

To reiterate from the official statement:

“As a last ditch effort, we asked if we could continue running the Championships and the Tour next year without a license, and shift our focus to working with them in 2024. We alluded to how the last year functioned in that capacity, with a mutual understanding that we would not get shut down and focus on the future. We were told directly that those times were now over. This was the final nail in the coffin given our very particular relationship with Nintendo. This is when we realized it truly was all being shut down for real. We asked if they understood the waves that would be made if we were forced to cancel, and Nintendo communicated that they were indeed aware.”

To be clear, we asked Nintendo multiple times if they had considered the implications of canceling the Championships as well as next year’s Tour. They affirmed that they had considered all variables.

We received this statement in writing from Nintendo shortly after our call:

“It is Nintendo’s expectation that an approved license be secured in order to operate any commercial activity featuring Nintendo IP. It is also expected to secure such a license well in advance of any public announcement. After further review, we’ve found that the Smash World Tour has not met these expectations around health & safety guidelines and has not adhered to our internal partner guidelines. Nintendo will not be able to grant a license for the Smash World Tour Championship 2022 or any Smash World Tour activity in 2023.”

To be clear, we did not even submit an application for 2023 yet, the license application was for the 2022 Championships (submitted in April). Nintendo including all 2023 activity was an addition we were not even expecting. In our call that accompanied the statement, we asked multiple times if we would be able to continue to operate without a license as we had in years past with the same “unofficial” understanding with Nintendo. We were told point blank that those “times are over.” They followed up the call with their statement in writing, again confirming both the 2022 Championships and all 2023 activity were in the exact same boat.

Source: Nintendo Shuts Down Smash World Tour ‘Without Any Warning’

Mercedes locks faster acceleration behind a yearly $1,200 subscription – the car can already go faster, they slowed you down

Mercedes is the latest manufacturer to lock auto features behind a subscription fee, with an upcoming “Acceleration Increase” add-on that lets drivers pay to access motor performance their vehicle is already capable of.

The $1,200 yearly subscription improves performance by boosting output from the motors by 20–24 percent, increasing torque, and shaving around 0.8 to 0.9 seconds off 0–60 mph acceleration when in Dynamic drive mode (via The Drive). The subscription doesn’t come with any physical hardware upgrades — instead, it simply unlocks the full capabilities of the vehicle, indicating that Mercedes intentionally limited performance to later sell as an optional extra. Acceleration Increase is only available for the Mercedes-EQ EQE and Mercedes-EQ EQS electric car models.
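
As a rough sanity check on those numbers, here is a minimal sketch with a hypothetical baseline (the 6.0-second figure below is illustrative, not an official Mercedes spec) and a naive model in which the 0-60 time scales inversely with power:

```python
# Naive check: how much should a 20-24% power increase shave off a 0-60 mph time
# if the time simply scaled inversely with power? (Ignores traction, drag, losses.)
# The 6.0 s baseline is a hypothetical figure, not an official Mercedes spec.
def boosted_time(base_seconds: float, power_gain: float) -> float:
    return base_seconds / (1.0 + power_gain)

BASE = 6.0  # hypothetical unboosted 0-60 mph time, in seconds
for gain in (0.20, 0.24):
    t = boosted_time(BASE, gain)
    print(f"+{gain:.0%} power: {t:.1f} s ({BASE - t:.1f} s quicker)")
# Roughly 1.0-1.2 s quicker under this crude model, in the same ballpark as the
# 0.8-0.9 s improvement Mercedes quotes (real-world gains are smaller than ideal).
```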

[…]

This comes just months after BMW sparked outrage by similarly charging an $18 monthly subscription in some countries for owners to use the heated seats already installed within its vehicles, just one of many features paywalled by the car manufacturer since 2020. BMW had previously also tried (and failed) to charge its customers $80 a month to access Apple CarPlay and Android Auto — features that other vehicle makers have included for free.

Source: Mercedes locks faster acceleration behind a yearly $1,200 subscription – The Verge

So they are basically saying you don’t really own the product you spent around $100,000 to buy.

Google Settles 40 States’ Location Data Suit for only $392 Million

Google agreed to a $391.5 million settlement on Monday to end a lawsuit accusing the tech giant of tricking users with location data privacy settings that didn’t actually turn off data collection. The payout, the result of a suit brought by 40 state attorneys general, marks one of the biggest privacy settlements in history. Google also promised to make additional changes to clarify its location tracking practices next year.

“For years Google has prioritized profit over their users’ privacy,” said Ellen Rosenblum, Oregon’s attorney general who co-led the case, in a press release. “They have been crafty and deceptive. Consumers thought they had turned off their location tracking features on Google, but the company continued to secretly record their movements and used that information for advertisers.”

[…]

The attorneys’ investigation into Google and subsequent lawsuit came after a 2018 report that found Google’s Location History setting didn’t stop the company’s location tracking, even though the setting promised that “with Location History off, the places you go are no longer stored.” Google quickly updated the description of its settings, clarifying that you actually have to turn off a completely different setting called Web & App Activity if you want the company to stop following you around.

[…]

Despite waves of legal and media attention, Google’s location settings are still confusing, according to experts in interface design. The fine print makes it clear that you need to change multiple settings if you don’t want Google collecting data about everywhere you go, but you have to read carefully. It remains to be seen how clearly the changes the company promised in the settlement will communicate its data practices.

[…]

Source: Google Settles 40 States’ Location Data Suit for $392 Million

Apple Vanquishes Evil YouTube Account Full Of Old Apple WWDC Videos

Many of you are likely to be familiar with WWDC, Apple’s Worldwide Developer Conference. This is one of those places where you get a bunch of Apple product reveals and news updates that typically result in the press tripping all over themselves to bow at the altar of an iPhone 300 or whatever. The conference has been going on for decades and one enterprising YouTube account made a point of archiving video footage from past events so that any interested person could go back and see the evolution of the company.

Until now, that is, since Apple decided to copyright-strike Brendan Shanks’ account to hell.

Now, he’s going to be moving the videos over to the Internet Archive, but that will take time and I suppose there’s nothing keeping Apple from turning its copyright guns to that site as well. In the meantime, this treasure trove of videos that Apple doesn’t seem to want to bother hosting itself is simply gone.

Now, did Shanks have permission from Apple to post those videos? He says no. Does that mean that Apple can take copyright action on them? Sure does! But the question is why. Why are antiquated videos, interesting mostly to hobbyists, worth all this chaos and bad PR?

The videos in question were decades-old recordings of WWDC events.

Due to the multiple violations, not only were the videos removed, but Shanks’ YouTube channel has been disabled. In addition to losing the archive, Shanks also lost his personal YouTube account, as well as his YouTube TV, which he’d just paid for.

And so here we are again, with a large company killing off a form of preservation effort in the name of draconian copyright enforcement. Good times.

Source: Apple Vanquishes Evil YouTube Account Full Of Old Apple WWDC Videos | Techdirt

Apple Apps Track You Even With Privacy Protections on – and they hoover a LOT

For all of Apple’s talk about how private your iPhone is, the company vacuums up a lot of data about you. iPhones do have a privacy setting that is supposed to turn off that tracking. According to a new report by independent researchers, though, Apple collects extremely detailed information on you with its own apps even when you turn off tracking, an apparent direct contradiction of Apple’s own description of how the privacy protection works.

The iPhone Analytics setting makes an explicit promise. Turn it off, and Apple says that it will “disable the sharing of Device Analytics altogether.” However, Tommy Mysk and Talal Haj Bakry, two app developers and security researchers at the software company Mysk, took a look at the data collected by a number of Apple iPhone apps—the App Store, Apple Music, Apple TV, Books, and Stocks. They found the analytics control and other privacy settings had no obvious effect on Apple’s data collection—the tracking remained the same whether iPhone Analytics was switched on or off.

[…]

The App Store appeared to harvest information about every single thing you did in real time, including what you tapped on, which apps you searched for, what ads you saw, how long you looked at a given app, and how you found it. The app sent details about you and your device as well, including ID numbers, what kind of phone you’re using, your screen resolution, your keyboard languages, and how you’re connected to the internet—notably, the kind of information commonly used for device fingerprinting.

“Opting-out or switching the personalization options off did not reduce the amount of detailed analytics that the app was sending,” Mysk said. “I switched all the possible options off, namely personalized ads, personalized recommendations, and sharing usage data and analytics.”
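
To make that concrete, here is a rough, purely illustrative sketch of what a single analytics event of the kind described above might look like. The field names and values are assumptions made for illustration; this is not the payload the researchers captured, only the categories of data they say were being sent.

```python
# Hypothetical sketch of an App Store analytics event of the kind described
# above. Field names and values are invented for illustration; they are NOT
# the payload Mysk captured, only the categories of data reportedly sent.
app_store_event = {
    "event": "app_detail_viewed",            # what you tapped on
    "search_query": "meditation app",        # which apps you searched for
    "view_duration_ms": 8400,                # how long you looked at a given app
    "referrer": "search_results",            # how you found it
    "device": {
        "id": "0000-1111-2222-3333",         # persistent identifier shared across apps
        "model": "iPhone14,2",
        "screen_resolution": "1170x2532",
        "keyboard_languages": ["en-US", "nl-NL"],
        "network": "wifi",
    },
}

# A handful of fields like these is enough to build a fairly stable device
# fingerprint, and a shared identifier is what would let activity be linked
# across different first-party apps.
```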

[…]

Most of the apps that sent analytics data shared consistent ID numbers, which would allow Apple to track your activity across its services, the researchers found.

[…]

In the App Store, for example, the fact that you’re looking at apps related to mental health, addiction, sexual orientation, and religion can reveal things that you might not want sent to corporate servers.

It’s impossible to know what Apple is doing with the data without the company’s own explanation, and as is so often the case, Apple has been silent so far.

[…]

You can see what the data looks like for yourself in the video Mysk posted to Twitter, documenting the information collected by the App Store:

The App Store on your iPhone is watching your every move

This isn’t an every-app-is-tracking-me-so-what’s-one-more situation. These findings are out of line with standard industry practices, Mysk says. He and his research partner ran similar tests in the past looking at analytics in Google Chrome and Microsoft Edge. In both of those apps, Mysk says the data isn’t sent when analytics settings are turned off.

[…]

Source: Apple Apps Track You Even With Privacy Protections on: Report

Senator Wyden Asks State Dept. To Explain Why It’s Handing Out ‘Unfettered’ Access To Americans’ Passport Data

[…]

In 2018, a blockbuster report detailed the actions of CBP agent Jeffrey Rambo. Rambo apparently took it upon himself to track down whistleblowers and leakers. To do this, he cozied up to a journalist and leveraged the wealth of data on travelers collected by federal agencies in hopes of sniffing out sources.

A few years later, another report delved deeper into the CBP and Rambo’s actions. This reporting — referencing a still-redacted DHS Inspector General’s report — showed the CBP routinely tracked journalists (as well as activists and immigration lawyers) via a national counter-terrorism database. This database was apparently routinely queried for reasons unrelated to national security objectives, and the information obtained was used to open investigations targeting journalists.

That report remains redacted nearly a year later. But Senator Ron Wyden is demanding answers from the State Department about its far too cozy relationship with other federal agencies, including the CBP.

The State Department is giving law enforcement and intelligence agencies unrestricted access to the personal data of more than 145 million Americans, through information from passport applications that is shared without legal process or any apparent oversight, according to a letter sent from Sen. Ron Wyden to Secretary of State Antony Blinken and obtained by Yahoo News.

The information was uncovered by Wyden during his ongoing probe into reporting by Yahoo News about Operation Whistle Pig, a wide-ranging leak investigation launched by a Border Patrol agent and his supervisors at the U.S. Customs and Border Protection’s National Targeting Center.

On Wednesday, Wyden sent a letter to Blinken requesting detailed information on which federal agencies are provided access to State Department passport information on U.S. citizens.

The letter [PDF] from Wyden points out that the State Department is giving “unfettered” access to at least 25 federal agencies, including DHS components like the CBP. The OIG report into “Operation Whistle Pig” (the one that remains redacted) details Agent Rambo’s actions. Subsequent briefings by State Department officials provided more details that are cited in Wyden’s letter.

More than 25 agencies, but the State Department has so far refused to identify them.

Department officials declined to identify the specific agencies, but said that both law enforcement and intelligence agencies can access the [passport application] database. They further stated that, while the Department is not legally required to provide other agencies with such access, the Department has done so without requiring these other agencies to obtain compulsory legal process, such as a subpoena or court order.

Sharing is caring, the State Department believes. However, it cannot explain why it feels this passport application database should be an open book to whatever government agencies seek access to it. This is unacceptable, says Senator Wyden. Citing the “clear abuses” by CBP personnel detailed in the Inspector General’s report, Wyden is demanding details the State Department has so far refused to provide, like which agencies have access and the number of times these agencies have accessed the Department’s database.

Why? Because rights matter, no matter what the State Department and its beneficiaries might think.

The Department’s mission does not include providing dozens of other government agencies with self-service access to 145 million Americans’ personal data. The Department has voluntarily taken on this role, and in doing so, prioritized the interests of other agencies over those of law-abiding Americans.

That’s the anger on behalf of millions expressed by Senator Wyden. There are also demands. Wyden not only wants answers, he wants changes. He has instructed the State Department to put policies in place to ensure the abuses seen in “Operation Whistle Pig” do not reoccur. He also says the Department should notify Americans when their passport application info is accessed or handed over to government agencies. Finally, he instructs the Department to provide annual statistics on outside agency access to the database, so Americans can better understand who’s going after their data.

So, answers and changes, things federal agencies rarely enjoy engaging with. The answers are likely to be long in coming. The requested changes, even more so. But at least this drags the State Department’s dirty laundry out into the daylight, which makes it a bit more difficult for the Department to continue to ignore a problem it hasn’t addressed for more than three years.

Source: Senator Wyden Asks State Dept. To Explain Why It’s Handing Out ‘Unfettered’ Access To Americans’ Passport Data | Techdirt

Dutch foundation launches mass privacy claim against Twitter – DutchNews.nl

A Dutch foundation is planning to take legal action against social media platform Twitter for illegally collecting and trading in personal details gathered via free apps such as Duolingo and Wordfeud, as well as dating apps and weather forecaster Buienradar. Twitter owned the advertising platform MoPub between 2013 and January 2022, and that is where the problem lies, the SDBN foundation says. It estimates that 11 million people’s information may have been illegally gathered and sold. Between 2013 and 2021, MoPub had access to information gleaned via 30,000 free apps on smartphones and tablets, the foundation says. In essence, the foundation says, consumers ‘paid with their privacy’ without giving permission.

The foundation is demanding compensation on behalf of the apps’ users and if Twitter refuses to pay, the foundation will start a legal case against the company.

Source: Dutch foundation launches mass privacy claim against Twitter – DutchNews.nl

Shazam – an Apple company – was also busy with this sort of thing. It’s pretty disturbing that this kind of news isn’t a surprise at all any more.

But who is SDBN to collect on behalf of Dutch people? I don’t recall them starting a class action for people to subscribe to, and I doubt they will be dividing the money out to the Dutch people either.

Greece To Ban Sale of Spyware After Government Is Accused of Surveillance of Opposition Party Leader

Prime Minister Kyriakos Mitsotakis has announced that Greece would ban the sale of spyware, after his government was accused in a news report of targeting dozens of prominent politicians, journalists and businessmen for surveillance, and the judicial authorities began an investigation. From a report: The announcement is the latest chapter in a scandal that erupted over the summer, when Mr. Mitsotakis conceded that Greece’s state intelligence service had been monitoring an opposition party leader with a traditional wiretap last year. That revelation came after the politician discovered that he had also been targeted with a spyware program known as Predator.

The Greek government said the wiretap was legal but never specified the reasons for it, and Mr. Mitsotakis said it was done without his knowledge. The government has also asserted that it does not own or use the Predator spyware, and has insisted that the simultaneous targeting with a wiretap and Predator was a coincidence.

Source: Greece To Ban Sale of Spyware After Government Is Accused of Surveillance – Slashdot

Microsoft’s GitHub Copilot Sued Over ‘Software Piracy on an Unprecedented Scale’

“Microsoft’s GitHub Copilot is being sued in a class action lawsuit that claims the AI product is committing software piracy on an unprecedented scale,” reports IT Pro.

Programmer/designer Matthew Butterick filed the case Thursday in San Francisco, saying it was on behalf of millions of GitHub users potentially affected by the $10-a-month Copilot service: The lawsuit seeks to challenge the legality of GitHub Copilot, as well as OpenAI Codex, which powers the AI tool, and has been filed against GitHub, its owner Microsoft, and OpenAI…. “By training their AI systems on public GitHub repositories (though based on their public statements, possibly much more), we contend that the defendants have violated the legal rights of a vast number of creators who posted code or other work under certain open-source licences on GitHub,” said Butterick.

These licences include a set of 11 popular open source licences that all require attribution of the author’s name and copyright. This includes the MIT licence, the GNU General Public Licence, and the Apache licence. The case claimed that Copilot violates and removes these licences offered by thousands, possibly millions, of software developers, and is therefore committing software piracy on an unprecedented scale.
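
For context, the attribution these licences require is typically just a copyright and licence notice kept alongside the code, which is the very text the complaint says Copilot strips from its suggestions. A hypothetical example of what such a notice looks like on a trivial snippet (the author name is invented):

```python
# Copyright (c) 2020 Jane Example  (hypothetical author, for illustration only)
# Licensed under the MIT License.
#
# The MIT License requires that "the above copyright notice and this
# permission notice shall be included in all copies or substantial
# portions of the Software."

def clamp(value, low, high):
    """Clamp value into the inclusive range [low, high]."""
    return max(low, min(high, value))
```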

Copilot, which is entirely run on Microsoft Azure, often simply reproduces code that can be traced back to open-source repositories or licensees, according to the lawsuit. The code never contains attributions to the underlying authors, which is in violation of the licences. “It is not fair, permitted, or justified. On the contrary, Copilot’s goal is to replace a huge swath of open source by taking it and keeping it inside a GitHub-controlled paywall….” Moreover, the case stated that the defendants have also violated GitHub’s own terms of service and privacy policies, Section 1202 of the DMCA, which forbids the removal of copyright-management information, and the California Consumer Privacy Act.
The lawsuit also accuses GitHub of monetizing code from open source programmers, “despite GitHub’s pledge never to do so.”

And Butterick argued to IT Pro that “AI systems are not exempt from the law… If companies like Microsoft, GitHub, and OpenAI choose to disregard the law, they should not expect that we the public will sit still.” Butterick believes AI can only elevate humanity if it’s “fair and ethical for everyone. If it’s not… it will just become another way for the privileged few to profit from the work of the many.”

Reached for comment, GitHub pointed IT Pro to its announcement Monday that, starting next year, suggested code fragments will come with the ability to identify when they match other publicly available code — or code they’re similar to.

The article adds that this lawsuit “comes at a time when Microsoft is looking at developing Copilot technology for use in similar programmes for other job categories, like office work, cyber security, or video game design, according to a Bloomberg report.”

Source: Microsoft’s GitHub Copilot Sued Over ‘Software Piracy on an Unprecedented Scale’ – Slashdot