You don’t own what you buy: Roald Dahl eBooks censored remotely after purchase

“Owners of Roald Dahl ebooks are having their libraries automatically updated with the new censored versions containing hundreds of changes to language related to weight, mental health, violence, gender and race,” reports the British newspaper the Times. Readers who bought electronic versions of the writer’s books, such as Matilda and Charlie and the Chocolate Factory, before the controversial updates have discovered their copies have now been changed.

Puffin Books, the company which publishes Dahl novels, updated the electronic novels, in which Augustus Gloop is no longer described as fat or Mrs Twit as fearfully ugly, on devices such as the Amazon Kindle. Dahl’s biographer Matthew Dennison last night accused the publisher of “strong-arming readers into accepting a new orthodoxy in which Dahl himself has played no part.”
Meanwhile…

  • Children’s book author Frank Cottrell-Boyce admits in the Guardian that “as a child I disliked Dahl intensely. I felt that his snobbery was directed at people like me and that his addiction to revenge was not good. But that was fine — I just moved along.”

But Cottrell-Boyce’s larger point is that “the key to reading for pleasure is having a choice about what you read” — and that childhood readers face greater threats. “The outgoing children’s laureate Cressida Cowell has spent the last few years fighting for her Life-changing Libraries campaign. It’s making a huge difference but it would have been a lot easier if our media showed a fraction of the interest they showed in Roald Dahl’s vocabulary in our children.”

Source: Roald Dahl eBooks Reportedly Censored Remotely – Slashdot

Signal says it will shut down in UK over Online Safety Bill, which wants to install spyware on all your devices

[…]

The Online Safety Bill contemplates bypassing encryption with device-side scanning to protect children from harmful material – and, incidentally, breaking the security of end-to-end encryption in the process. It’s currently being considered in Parliament and has been the subject of controversy for months.

[ something something saving children – it’s always a bad sign when they trot that one out ]

The legislation contains what critics have called “a spy clause.” [PDF] It requires companies to remove child sexual exploitation and abuse (CSEA) material or terrorist content from online platforms “whether communicated publicly or privately.” As applied to encrypted messaging, that means either encryption must be removed to allow content scanning or scanning must occur prior to encryption.

Signal draws the line

Such schemes have been condemned by technical experts and Signal is similarly unenthusiastic.

“Signal is a nonprofit whose sole mission is to provide a truly private means of digital communication to anyone, anywhere in the world,” said Meredith Whittaker, president of the Signal Foundation, in a statement provided to The Register.

“Many millions of people globally rely on us to provide a safe and secure messaging service to conduct journalism, express dissent, voice intimate or vulnerable thoughts, and otherwise speak to those they want to be heard by without surveillance from tech corporations and governments.”

“We have never, and will never, break our commitment to the people who use and trust Signal. And this means that we would absolutely choose to cease operating in a given region if the alternative meant undermining our privacy commitments to those who rely on us.”

Asked whether she was concerned that Signal could be banned under the Online Safety rules, Whittaker told The Register, “We were responding to a hypothetical, and we’re not going to speculate on probabilities. The language in the bill as it stands is deeply troubling, particularly the mandate for proactive surveillance of all images and texts. If we were given a choice between kneecapping our privacy guarantees by implementing such mass surveillance, or ceasing operations in the UK, we would cease operations.”

[…]

“If Signal withdraws its services from the UK, it will particularly harm journalists, campaigners and activists who rely on end-to-end encryption to communicate safely.”

[…]


Source: Signal says it will shut down in UK over Online Safety Bill

Google’s Play Store Privacy Labels Are a ‘Total Failure:’ Study

[…]

“There are two main problems here,” Mozilla’s Caltrider said. “The first problem is Google only requires the information in labels to be self-reported. So, fingers crossed, because it’s the honor system, and it turns out that most labels seem to be misleading.”

Google promises to make apps fix problems it finds in the labels, and threatens to ban apps that don’t come into compliance. But the company says only that it is vigilant about enforcement; it has never provided details about how it polices apps, and it didn’t respond to a question about any enforcement actions it has taken in the past.

[…]

Of course, Google could just read the privacy policies where apps spell out these practices, like Mozilla did, but there’s a bigger issue at play. These apps may not even be breaking Google’s privacy label rules, because those rules are so relaxed that “they let companies lie,” Caltrider said.

“That’s the second problem. Google’s own rules for what data practices you have to disclose are a joke,” Caltrider said. “The guidelines for the labels make them useless.”

If you go looking at Google’s rules for the data safety labels, which are buried deep in a cascading series of help menus, you’ll learn that there is a long list of things that you don’t have to tell your users about. In other words, you can say you don’t collect data or share it with third parties, while you do in fact collect data and share it with third parties.

For example, apps don’t have to disclose data sharing if they have “consent” to share the data from users, or if they’re sharing the data with “service providers,” or if the data is “anonymized” (which is nonsense), or if the data is being shared for “specific legal purposes.” There are similar exceptions for what counts as data collection. Those loopholes are so big you could fill up a truck with data and drive it right on through.

[…]

Source: Google’s Play Store Privacy Labels Are a ‘Total Failure:’ Study

Which goes to show again: walled-garden app stores really are no better than just downloading stuff from the internet – unless you’re the owner of the walled garden, collecting 30% of revenue for doing basically nothing.

AI-created images lose U.S. copyrights in test for new technology

Images in a graphic novel that were created using the artificial-intelligence system Midjourney should not have been granted copyright protection, the U.S. Copyright Office said in a letter seen by Reuters.

“Zarya of the Dawn” author Kris Kashtanova is entitled to a copyright for the parts of the book Kashtanova wrote and arranged, but not for the images produced by Midjourney, the office said in its letter, dated Tuesday.

The decision is one of the first by a U.S. court or agency on the scope of copyright protection for works created with AI, and comes amid the meteoric rise of generative AI software like Midjourney, Dall-E and ChatGPT.

The Copyright Office said in its letter that it would reissue its registration for “Zarya of the Dawn” to omit images that “are not the product of human authorship” and therefore cannot be copyrighted.

The Copyright Office had no comment on the decision.

Kashtanova on Wednesday called it “great news” that the office allowed copyright protection for the novel’s story and the way the images were arranged, which Kashtanova said “covers a lot of uses for the people in the AI art community.”

Kashtanova said they were considering how best to press ahead with the argument that the images themselves were a “direct expression of my creativity and therefore copyrightable.”

Midjourney general counsel Max Sills said the decision was “a great victory for Kris, Midjourney, and artists,” and that the Copyright Office is “clearly saying that if an artist exerts creative control over an image generating tool like Midjourney …the output is protectable.”

Midjourney is an AI-based system that generates images based on text prompts entered by users. Kashtanova wrote the text of “Zarya of the Dawn,” and Midjourney created the book’s images based on prompts.

The Copyright Office told Kashtanova in October it would reconsider the book’s copyright registration because the application did not disclose Midjourney’s role.

The office said on Tuesday that it would grant copyright protection for the book’s text and the way Kashtanova selected and arranged its elements. But it said Kashtanova was not the “master mind” behind the images themselves.

“The fact that Midjourney’s specific output cannot be predicted by users makes Midjourney different for copyright purposes than other tools used by artists,” the letter said.

Source: AI-created images lose U.S. copyrights in test for new technology | Reuters

I am not sure why they are calling this a victory, as the Copyright Office is basically reiterating that what she created is hers and what the AI created cannot be copyrighted by her or by the AI itself. That’s a loss for the AI.

MetaGuard: Going Incognito in the Metaverse

[…]

with numerous recent studies showing the ease with which VR users can be profiled, deanonymized, and data harvested, metaverse platforms carry all the privacy risks of the current internet and more while at present having none of the defensive privacy tools we are accustomed to using on the web. To remedy this, we present the first known method of implementing an “incognito mode” for VR. Our technique leverages local ε-differential privacy to quantifiably obscure sensitive user data attributes, with a focus on intelligently adding noise when and where it is needed most to maximize privacy while minimizing usability impact. Moreover, our system is capable of flexibly adapting to the unique needs of each metaverse application to further optimize this trade-off. We implement our solution as a universal Unity (C#) plugin that we then evaluate using several popular VR applications. Upon faithfully replicating the most well known VR privacy attack studies, we show a significant degradation of attacker capabilities when using our proposed solution.
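To make the local ε-differential privacy idea concrete, here is a minimal Python sketch of the core mechanism such a system could use: perturbing a sensitive numeric attribute with Laplace noise scaled to sensitivity/ε before it ever leaves the device. The height example and the sensitivity value are illustrative assumptions of mine, not details from the paper:

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize(value: float, sensitivity: float, epsilon: float,
              rng: random.Random) -> float:
    """Return value plus Laplace noise with scale sensitivity/epsilon.

    Smaller epsilon means more noise: stronger privacy, lower utility --
    exactly the trade-off the paper says it tunes per application.
    """
    return value + laplace_noise(sensitivity / epsilon, rng)

# Hypothetical example: obscure a VR user's reported height (metres).
# The 0.5 m sensitivity is an illustrative assumption, not a figure
# taken from the MetaGuard paper.
rng = random.Random(42)
noisy_height = privatize(1.75, sensitivity=0.5, epsilon=1.0, rng=rng)
```

The noisy value, not the true one, would be what the VR application (and any attacker observing it) gets to see.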

[…]

Source: MetaGuard: Going Incognito in the Metaverse | Berkeley RDI

3 motion points allow you to be identified within seconds in VR

[…]

In a paper provided to The Register in advance of its publication on ArXiv, academics Vivek Nair, Wenbo Guo, Justus Mattern, Rui Wang, James O’Brien, Louis Rosenberg, and Dawn Song set out to test the extent to which individuals in VR environments can be identified by body movement data.

The boffins gathered telemetry data from more than 55,000 people who played Beat Saber, a VR rhythm game in which players wave hand controllers to music. Then they digested 3.96TB of data, from game leaderboard BeatLeader, consisting of 2,669,886 game replays from 55,541 users during 713,013 separate play sessions.

These Beat Saber Open Replay (BSOR) files contained metadata (devices and game settings), telemetry (measurements of the position and orientation of players’ hands, head, and so on), context info (type, location, and timing of in-game stimuli), and performance stats (responses to in-game stimuli).

From this, the researchers focused on the data derived from the head and hand movements of Beat Saber players. Just five minutes of those three data points proved enough to train a classification model that, given 100 seconds of motion data from the game, could uniquely identify the player 94 percent of the time. And with just 10 seconds of motion data, the classification model still managed 73 percent accuracy.
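The idea the paragraph describes – summarize each user's motion telemetry into features, then match an unseen session against known users – can be illustrated with a toy sketch. The feature choice (per-channel means of three tracked heights) and the nearest-centroid matching below are deliberate simplifications for illustration, not the paper's actual model:

```python
import math

def features(session):
    """Collapse a session of (head_y, left_hand_y, right_hand_y) samples
    into a per-channel mean -- a deliberately crude feature summary."""
    n = len(session)
    return [sum(sample[i] for sample in session) / n for i in range(3)]

def train(sessions_by_user):
    """Compute one centroid feature vector per user from their sessions."""
    centroids = {}
    for user, sessions in sessions_by_user.items():
        vecs = [features(s) for s in sessions]
        centroids[user] = [sum(v[i] for v in vecs) / len(vecs)
                           for i in range(3)]
    return centroids

def identify(centroids, session):
    """Attribute an unlabeled session to the user with the nearest centroid."""
    f = features(session)
    return min(centroids, key=lambda user: math.dist(f, centroids[user]))
```

Even this crude version separates users whose motion statistics differ; the study's point is that real head-and-hand traces are distinctive enough for this style of attack to work at the scale of 55,000 people.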

“The study demonstrates that over 55k ‘anonymous’ VR users can be de-anonymized back to the exact individual just by watching their head and hand movements for a few seconds,” said Vivek Nair, a UC Berkeley doctoral student and one of the authors of the paper, in an email to The Register.

“We have known for a long time that motion reveals information about people, but what this study newly shows is that movement patterns are so unique to an individual that they could serve as an identifying biometric, on par with facial or fingerprint recognition. This really changes how we think about the notion of ‘privacy’ in the metaverse, as just by moving around in VR, you might as well be broadcasting your face or fingerprints at all times!”

[…]

“There have been papers as early as the 1970s which showed that individuals can identify the motion of their friends,” said Nair. “A 2000 paper from Berkeley even showed that with motion capture data, you can recreate a model of a person’s entire skeleton.”

“What hasn’t been shown, until now, is that the motion of just three tracked points in VR (head and hands) is enough to identify users on a huge (and maybe even global) scale. It’s likely true that you can identify and profile users with even greater accuracy outside of VR when more tracked objects are available, such as with full-body tracking that some 3D cameras are able to do.”

[…]

Nair said he remains optimistic about the potential of systems like MetaGuard – a VR incognito mode project he and colleagues have been working on – to address privacy threats by altering VR in a privacy-preserving way rather than trying to prevent data collection.

The paper suggests similar data defense tactics: “We hope to see future works which intelligently corrupt VR replays to obscure identifiable properties without impeding their original purpose (e.g., scoring or cheating detection).”

One reason to prefer data alteration over data denial is that there may be VR applications (e.g., motion-based medical diagnostics) that justify further investment in the technology, as opposed to propping up pretend worlds just for the sake of privacy pillaging.

[…]

Source: How virtual reality telemetry is the next threat to privacy • The Register

Google wants Go reporting telemetry data by default

Russ Cox, a Google software engineer steering the development of the open source Go programming language, has presented a possible plan to implement telemetry in the Go toolchain.

However, many in the Go community object, because the plan calls for telemetry that is on by default.

These alarmed developers would prefer an opt-in rather than an opt-out regime, a position the Go team rejects because it would ensure low adoption and would reduce the amount of telemetry data received to the point it would be of little value.

Cox’s proposal summarizes lengthier documentation spread across three blog posts.

Telemetry, as Cox describes it, involves the Go toolchain sending data to a server to provide information about which functions are being used and how the software is performing. He argues it is beneficial for open source projects to have that information to guide development.

“I believe that open-source software projects need to explore new telemetry designs that help developers get the information they need to work efficiently and effectively, without collecting invasive traces of detailed user activity,” he wrote.
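To make the distinction concrete, the counter-style telemetry Cox advocates records coarse local event counts rather than detailed activity traces. A minimal sketch of that idea, in Python (the environment variable, file name, and counter names here are invented for illustration; the Go team's actual design is considerably more elaborate):

```python
import json
import os

# Both names below are hypothetical, for illustration only.
OPT_OUT_ENV = "MYTOOL_TELEMETRY_OFF"
COUNTER_FILE = "telemetry_counters.json"

def bump(counter: str, path: str = COUNTER_FILE) -> None:
    """Increment a named local counter unless the user has opted out.

    Only coarse event counts are stored -- no arguments, file paths, or
    other user content -- in the spirit of the counter-style telemetry
    Cox describes. This is a sketch, not the Go team's implementation.
    """
    if os.environ.get(OPT_OUT_ENV):
        return
    counts = {}
    if os.path.exists(path):
        with open(path) as f:
            counts = json.load(f)
    counts[counter] = counts.get(counter, 0) + 1
    with open(path, "w") as f:
        json.dump(counts, f)
```

The controversy is not about the mechanism itself but about the default: whether such counters start recording (and uploading aggregates) without the user first opting in.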

[…]

Some people believe they have a right to privacy, to be left alone, and to demand that their rights are respected through opt-in consent.

As developer Louis Thibault put it, “The Go dev team seems not to have internalized the principle of affirmative consent in matters of data collection.”

Others, particularly in the ad industry, but in other endeavors as well, see opt-in as an existential threat. They believe that they have a right to gather data and that it’s better to seek forgiveness via opt-out than to ask for permission unlikely to be given via opt-in.

Source: Google’s Go may add telemetry reporting that’s on by default • The Register

Windows 11 Sends Tremendous Amount of User Data to Third Parties – pretty much spyware for loads of people!

Many programs collect user data and send it back to their developers to improve software or provide more targeted services. But according to the PC Security Channel (via Neowin), Microsoft’s Windows 11 sends data not only to the Redmond, Washington-based software giant, but also to multiple third parties.

To analyze DNS traffic generated by a freshly installed copy of Windows 11 on a brand-new notebook, the PC Security Channel used the Wireshark network protocol analyzer that reveals precisely what is happening on a network. The results were astounding enough for the YouTube channel to call Microsoft’s Windows 11 “spyware.”

As it turned out, an all-new Windows 11 PC that was never used to browse the Internet contacted not only Windows Update, MSN and Bing servers, but also Steam, McAfee, geo.prod.do, and Comscore ScorecardResearch.com. Apparently, the latest operating system from Microsoft collected and sent telemetry data to various market research companies, advertising services, and the like.

To prove the point, the PC Security Channel used the same tool to find out what Windows XP contacted after a fresh install. It turned out that the only things the 20-plus-year-old operating system contacted were the Windows Update and Microsoft Update servers.

“As with any modern operating system, users can expect to see data flowing to help them remain secure, up to date, and keep the system working as anticipated,” a Microsoft spokesperson told Tom’s Hardware. “We are committed to transparency and regularly publish information about the data we collect to empower customers to be more informed about their privacy.”

Some of the claims may be, technically, overblown. Telemetry data is mentioned in Windows’ terms of service, which many people skip over to use the operating system. And you can choose not to enable at least some of this by turning off settings the first time you boot into the OS.

“By accepting this agreement and using the software you agree that Microsoft may collect, use, and disclose the information as described in the Microsoft Privacy Statement (aka.ms/privacy), and as may be described in the user interface associated with the software features,” the terms of service read. They also point out that some data-sharing settings can be turned off.

Obviously, a lot has changed in 20 years and we now use more online services than back in the early 2000s. As a result, various telemetry data has to be sent online to keep certain features running. But at the very least, Microsoft should do a better job of expressly asking for consent and stating what will be sent and where, because you can’t opt out of all of the data-sharing “features.” The PC Security Channel warns that even when telemetry tracking is disabled by third-party utilities, Windows 11 still sends certain data.

Source: Windows 11 Sends Tremendous Amount of User Data to Third Parties, YouTuber Claims (Update) | Tom’s Hardware

Just when you thought Microsoft was the good guys again, and it was all Google, Apple, Amazon and Meta/Facebook being evil, they’re back at it to prove they still have it!

Microsoft says it won’t access private data in Office version scan installed as OS update

Microsoft wants everyone to know that it isn’t looking to invade their privacy while looking through their Windows PCs to find out-of-date versions of Office software.

In its KB5021751 update last month, Microsoft included a plan to scan Windows systems to smoke out Office versions that are no longer supported or nearing the end of support: Office 2007 (support ended in 2017), Office 2010 (2020), and the 2013 build (this coming April).

The company stressed that the scan would run only once and would not install anything on the user’s Windows system, adding that the update file is scanned to ensure it’s not infected by malware and is stored on highly secure servers to prevent unauthorized changes.

The update caused some discussion among users, at least enough to convince Microsoft to make another pitch that it is respecting user privacy and won’t access private data despite scanning their systems.

The update collects diagnostic and performance data so that it can determine the use of various versions of Office and how to best support and service them, the software maker wrote in an expanded note this week. The update will silently run once to collect the data and no files are left on the user’s systems once the scan is completed.

“This data is gathered from registry entries and APIs,” it wrote. “The update does not gather licensing details, customer content, or data about non-Microsoft products. Microsoft values, protects, and defends privacy.”

[…]

Source: Microsoft won’t access private data in Office version scan • The Register

Of course, just sending data about what version of Office is installed is in fact sending private data about stuff installed on your PC. This is Not OK.

Claims Datadog asked developer to kill open source data tool, which he did. And now he’s resurrected it.

After a delay of over a year, an open source code contribution to enable the export of data from Datadog’s Application Performance Monitoring (APM) platform finally got merged on Tuesday into a collection of OpenTelemetry components.

The reason for the delay, according to John Dorman, the software developer who wrote the Datadog APM Receiver code, is that, about a year ago, Datadog asked him not to submit the software.

On February 8 last year Dorman, who goes by the name “boostchicken” on GitHub, announced that he was closing his pull request – the term for code changes proposed to a project.

“After some consideration I’ve decided to close this PR [pull request],” he wrote. “[T]here are better ways to OTEL [OpenTelemetry] support w/ Datadog.”

Members of the open source community who are focused on application monitoring – collecting and analyzing logs, traces of app activity, and other metrics that can be useful to keep applications running – had questions, claiming that Datadog prefers to lock customers into its product.

Shortly after the post, Charity Majors, CEO of Honeycomb.io, a rival application monitoring firm, wrote a Twitter thread elaborating on the benefits of OpenTelemetry and calling out Datadog for only supporting OTEL as a one-way street.

“Datadog has been telling users they can use OTEL to get data in, but not get data out,” Majors wrote. “The Datadog OTEL collector PR was silently killed. The person who wrote it appears to have been pressured into closing it, and nothing has been proposed to replace it.”

Behavior of this sort would be inconsistent with the goals of the Cloud Native Computing Foundation’s (CNCF) OpenTelemetry project, which seeks “to provide a set of standardized vendor-agnostic SDKs, APIs, and tools for ingesting, transforming, and sending data to an Observability back-end (i.e. open source or commercial vendor).”

That is to say, the OpenTelemetry project aims to promote data portability, instead of hindering it, as is common among proprietary software vendors.

The smoking hound

On January 26 Dorman confirmed suspicions that he had been approached by Datadog and asked not to proceed with his efforts.

“I owe the community an apology on this one,” Dorman wrote in his pull request thread. “I lacked the courage of my convictions and when push came to shove and I had to make the hard choice, I took the easy way out.”

“Datadog ‘asked’ me to kill this pull request. There were other members from my organization present that let me know this answer will be a ‘ok’. I am sure I could have said no, at the moment I just couldn’t fathom opening Pandora’s Box. There you have it, no NDA, no stack of cash. I left the code hoping someone could carry on. I was willing to give [Datadog] this code, no strings attached as long as it moved OTel forward. They declined.”

He added, “However, I told them if you don’t support OpenTelemetry in a meaningful way, I will start sending pull requests again. So here we are. I feel I have given them enough time to do the right thing.”

Indeed, Dorman subsequently re-opened his pull request, which on Tuesday was merged into the repository for OpenTelemetry Collector components. His Datadog APM Receiver can ingest traces in the Datadog Trace Agent format.

Coincidentally, Datadog on Tuesday published a blog post titled, “Datadog’s commitment to OpenTelemetry and the open source community.” It makes no mention of the alleged request to “kill [the] pull request.” Instead, it enumerates various ways in which the company has supported OpenTelemetry recently.

The Register asked Datadog for comment. We’ve not heard back.

Dorman, who presently works for Meta, did not respond to a request for comment. However, last week, via Twitter, he credited Grafana, an open source Datadog competitor, for having “formally sponsored” the work and for pointing out that Datadog “refuses to support OTEL in meaningful ways.”

The OpenTelemetry Governance Committee for the CNCF provided The Register with the following statement:

“We’re still trying to make sense of what happened here; we’ll comment on it once we have a full understanding. Regardless, we are happy to review and accept any contributions which push the project forward, and this [pull request] was merged yesterday,” it said.

Source: Claims Datadog asked developer to kill open source data tool • The Register

FTC Fines GoodRx $1.5M for Sending Medication Data to Facebook, Google, others

The Federal Trade Commission took historic action against the medication discount service GoodRx Wednesday, issuing a $1.5 million fine against the company for sharing data about users’ prescriptions with Facebook, Google, and others. It’s a move that could usher in a new era of health privacy in the United States.

“Digital health companies and mobile apps should not cash in on consumers’ extremely sensitive and personally identifiable health information,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection, in a statement.

[…]

In addition to a fine, GoodRx has agreed to a first-of-its-kind provision banning the company from sharing health data with third parties for advertising purposes. That may sound unsurprising, but many consumers don’t realize that health privacy laws generally don’t apply to companies that aren’t affiliated with doctors or insurance companies.

[…]

GoodRx is a health technology company that gives out free coupons for discounts on common medications and connects users with healthcare providers for telehealth visits. It also shared data about the prescriptions users were buying and looking up with third-party advertising companies, which drew the ire of the FTC.

GoodRx’s privacy problems were first uncovered by this reporter in an investigation with Consumer Reports, followed by a similar report in Gizmodo. At the time, if you looked up Viagra, Prozac, PrEP, or any other medication, GoodRx would tell Facebook, Google, and a variety of companies in the ad business, such as Criteo, Branch, and Twilio. GoodRx wasn’t selling the data. Instead, it shared the information so those companies could help GoodRx target its own customers with ads for more drugs.

[…]

Source: FTC Fines GoodRx $1.5M for Sending Medication Data to Facebook

An AI robot lawyer was set to argue in court. Scared lawyers shut it down with jail threats

A British man who planned to have a “robot lawyer” help a defendant fight a traffic ticket has dropped the effort after receiving threats of possible prosecution and jail time.

Joshua Browder, the CEO of the New York-based startup DoNotPay, created a way for people contesting traffic tickets to use arguments in court generated by artificial intelligence.

Here’s how it was supposed to work: The person challenging a speeding ticket would wear smart glasses that both record court proceedings and dictate responses into the defendant’s ear from a small speaker. The system relied on a few leading AI text generators, including ChatGPT and DaVinci.

The first-ever AI-powered legal defense was set to take place in California on Feb. 22, but not anymore.

As word got out, an uneasy buzz began to swirl among various state bar officials, according to Browder. He says angry letters began to pour in.

“Multiple state bars have threatened us,” Browder said. “One even said a referral to the district attorney’s office and prosecution and prison time would be possible.”

In particular, Browder said one state bar official noted that the unauthorized practice of law is a misdemeanor in some states, punishable by up to six months in county jail.

“Even if it wouldn’t happen, the threat of criminal charges was enough to give it up,” he said. “The letters have become so frequent that we thought it was just a distraction and that we should move on.”

State bar organizations license and regulate attorneys, as a way to ensure people hire lawyers who understand the law.

Browder declined to say which state bar in particular sent letters, or which official made the threat of possible prosecution, saying his startup, DoNotPay, is under investigation by multiple state bars, including California’s.

[…]

“The truth is, most people can’t afford lawyers,” he said. “This could’ve shifted the balance and allowed people to use tools like ChatGPT in the courtroom that maybe could’ve helped them win cases.”

The future of robot lawyers faces uncertainty for another reason that is far simpler than the bar officials’ existential questions: courtroom rules.

Recording audio during a live legal proceeding is not permitted in federal court and is often prohibited in state courts. The AI tools developed by DoNotPay, which remain completely untested in actual courtrooms, require recording audio of arguments in order for the machine-learning algorithm to generate responses.

“I think calling the tool a ‘robot lawyer’ really riled a lot of lawyers up,” Browder said. “But I think they’re missing the forest for the trees. Technology is advancing and courtroom rules are very outdated.”


Source: An AI robot lawyer was set to argue in court. Real lawyers shut it down. : NPR

Lawyers protecting their own at the cost of the population? Who’d have thunk it?

Meta’s WhatsApp fined 5.5 mln euro by lead EU privacy regulator

Meta’s (META.O) WhatsApp subsidiary was fined 5.5 million euros ($5.95 million) on Thursday by Ireland’s Data Protection Commission (DPC), its lead EU privacy regulator, for an additional breach of the bloc’s privacy laws.

The DPC also told WhatsApp to reassess how it uses personal data for service improvements following a similar order it issued this month to Meta’s other main platforms, Facebook and Instagram, which stated Meta must reassess the legal basis upon which it targets advertising through the use of personal data.

[…]

Source: Meta’s WhatsApp fined 5.5 mln euro by lead EU privacy regulator | Reuters

Google Accused of Creating Digital Ad Monopoly in New Justice Dept. Suit

The Department of Justice filed a lawsuit against Google Tuesday, accusing the tech giant of using its market power to create a monopoly in the digital advertising business over the course of 15 years.

Google “corrupted legitimate competition in the ad tech industry by engaging in a systematic campaign to seize control of the wide swath of high-tech tools used by publishers, advertisers and brokers, to facilitate digital advertising,” the Justice Department alleges. Eight state attorneys general joined in the suit, filed in Virginia federal court. Google has faced five antitrust suits since 2020.

[…]

Source: Google Accused of Digital Ad Monopoly in New Justice Dept. Suit

Indian Android Users Can Finally Use Alternate Search and Payment Methods and forked Google apps

Android users in India will soon have more control over their devices, thanks to a court ruling. Beginning next month, Indian Android users can choose a different billing system when paying for apps and making in-app purchases rather than defaulting to the Play Store. Google will also allow Indian users to select a different search engine as their default when setting up a new device, which might have implications for upcoming EU regulations.

The move comes after a ruling last week by India’s Supreme Court. The case started late last year, when the Competition Commission of India (CCI) fined Google $161 million for imposing restrictions on its manufacturing partners. Google attempted to challenge the order, maintaining that this kind of change would stall the Android ecosystem and that “no other jurisdiction has ever asked for such far-reaching changes.”

[…]

Google also won’t be able to require the installation of its branded apps to grant the license for running Android OS anymore. From now on, device manufacturers in India will be able to license “individual Google apps” as they like for pre-installation rather than needing to bundle the whole kit and caboodle. Google is also updating the Android compatibility requirements for its OEM partners to “build non-compatible or forked variants.”

[…]

Of particular note is seeing how users will react to being able to choose whether to buy apps and other in-app purchases through the Play Store, where Google takes a 30% cut from each transaction, or through an alternative billing service like JIO Money or Paytm—or even Amazon Pay, available in India.

[…]

The US Department of Justice is also suing Google’s parent company, Alphabet, for a second time, this week targeting practices within its digital advertising business and alleging that the company “corrupted legitimate competition in the ad tech industry” to build out its monopoly.

Source: Indian Android Users Can Finally Use Alternate Search and Payment Methods

US law enforcement has warrantless access to many money transfers

Your international money transfers might not be as discreet as you think. Senator Ron Wyden and The Wall Street Journal have learned that US law enforcement can access details of money transfers without a warrant through an obscure surveillance program the Arizona attorney general’s office created in 2014. A database stored at a nonprofit, the Transaction Record Analysis Center (TRAC), provides full names and amounts for larger transfers (above $500) sent between the US, Mexico and 22 other regions through services like Western Union, MoneyGram and Viamericas. The program covers data for numerous Caribbean and Latin American countries in addition to Canada, China, France, Malaysia, Spain, Thailand, Ukraine and the US Virgin Islands. Some domestic transfers also enter the data set.

[…]

The concern, of course, is that officials can obtain sensitive transaction details without court oversight or customers’ knowledge. An unscrupulous officer could secretly track large transfers. Wyden adds that the people in the database are more likely to be immigrants, minorities and low-income residents who don’t have bank accounts and already have fewer privacy protections. The American Civil Liberties Union also asserts that the subpoenas used to obtain this data violate federal law. Arizona issued at least 140 of these subpoenas between 2014 and 2021.

[…]

Source: US law enforcement has warrantless access to many money transfers | Engadget

Meta sues surveillance company for allegedly scraping more than 600,000 accounts – pots and kettles

Meta has filed a lawsuit against Voyager Labs, which it has accused of creating tens of thousands of fake accounts to scrape data from more than 600,000 Facebook users’ profiles. It says the surveillance company pulled information such as posts, likes, friend lists, photos, and comments, along with other details from groups and pages. Meta claims that Voyager masked its activity using its Surveillance Software, and that the company has also scraped data from Instagram, Twitter, YouTube, LinkedIn and Telegram to sell and license for profit.

In the complaint, which was obtained by Gizmodo, Meta has asked a judge to permanently ban Voyager from Facebook and Instagram. “As a direct result of Defendant’s unlawful actions, Meta has suffered and continues to suffer irreparable harm for which there is no adequate remedy at law, and which will continue unless Defendant’s actions are enjoined,” the filing reads. Meta said Voyager’s actions have caused it “to incur damages, including investigative costs, in an amount to be proven at trial.”

Meta claims that Voyager scraped data from accounts belonging to “employees of non-profit organizations, universities, news media organizations, healthcare facilities, the armed forces of the United States, and local, state, and federal government agencies, as well as full-time parents, retirees, and union members.” The company noted in a blog post that it disabled accounts linked to Voyager and that it filed the suit to enforce its terms and policies.

[…]

In 2021, The Guardian reported that the Los Angeles Police Department had tested Voyager’s social media surveillance tools in 2019. The company is said to have told the department that police could use the software to track the accounts of a suspect’s friends on social media, and that the system could predict crimes before they took place by making assumptions about a person’s activity.

According to The Guardian, Voyager has suggested factors like Instagram usernames denoting Arab pride or tweeting about Islam could indicate someone is leaning toward extremism. Other companies, such as Palantir, have worked on predictive policing tech. Critics such as the Electronic Frontier Foundation claim that tech can’t predict crime and that algorithms merely perpetuate existing biases.

Data scraping is an issue that Meta has to take seriously. In 2021, it sued an individual for allegedly scraping data on more than 178 million users. Last November, the Irish Data Protection Commission fined the company €265 million ($277 million) for failing to stop bad actors from obtaining millions of people’s phone numbers and other data, which were published elsewhere online. The regulator said Meta failed to comply with GDPR data protection rules.

Source: Meta sues surveillance company for allegedly scraping more than 600,000 accounts | Engadget

Google will pay $9.5 million to settle Washington DC AG’s location-tracking lawsuit

Google has agreed to pay $9.5 million to settle a lawsuit brought by Washington DC Attorney General Karl Racine, who accused the company earlier this year of “deceiving users and invading their privacy.” Google has also agreed to change some of its practices, primarily concerning how it informs users about collecting, storing and using their location data.

“Google leads consumers to believe that consumers are in control of whether Google collects and retains information about their location and how that information is used,” the complaint, which Racine filed in January, read. “In reality, consumers who use Google products cannot prevent Google from collecting, storing and profiting from their location.”

Racine’s office also accused Google of employing “dark patterns,” which are design choices intended to deceive users into carrying out actions that don’t benefit them. Specifically, the AG’s office claimed that Google repeatedly prompted users to switch on location tracking in certain apps and informed them that certain features wouldn’t work properly if location tracking wasn’t on. Racine and his team found that location data wasn’t even needed for the app in question. They asserted that Google made it “impossible for users to opt out of having their location tracked.”

The $9.5 million payment is a paltry one for Google. Last quarter, it took parent company Alphabet under 20 minutes to make that much in revenue. The changes that the company will make to its practices as part of the settlement may have a bigger impact.
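A rough sanity check bears out the “under 20 minutes” claim. The sketch below assumes a quarterly revenue of roughly $69 billion (Alphabet’s reported figure for Q3 2022); the article does not state which quarter it used.

```python
# Back-of-the-envelope check of the "under 20 minutes" claim.
# Assumption: Alphabet quarterly revenue of ~$69 billion (reported Q3 2022).
quarterly_revenue = 69_000_000_000   # USD, assumed
minutes_per_quarter = 92 * 24 * 60   # ~92 days in a quarter

revenue_per_minute = quarterly_revenue / minutes_per_quarter
minutes_for_settlement = 9_500_000 / revenue_per_minute

print(f"~${revenue_per_minute:,.0f} of revenue per minute")
print(f"~{minutes_for_settlement:.1f} minutes to cover the $9.5M settlement")
```

Under these assumptions the settlement works out to roughly eighteen minutes of revenue.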

Folks who currently have certain location settings on will receive notifications telling them how they can disable each setting, delete the associated data and limit how long Google can keep that information. Users who set up a new Google account will be informed which location-related account settings are on by default and offered the chance to opt out.

Google will need to maintain a webpage that details its location data practices and policies. This will include ways for users to access their location settings and details about how each setting impacts Google’s collection, retention or use of location data.

Moreover, Google will be prevented from sharing a person’s precise location data with a third-party advertiser without the user’s explicit consent. The company will need to delete location data “that came from a device or from an IP address in web and app activity within 30 days” of obtaining the information.

[…]

Source: Google will pay $9.5 million to settle Washington DC AG’s location-tracking lawsuit | Engadget

Spy Tech Palantir’s Covid-era UK health contract extended without public consultation or competition

NHS England has extended its contract with US spy-tech biz Palantir for the system built at the height of the pandemic to give it time to resolve the twice-delayed procurement of a data platform to support health service reorganization and tackle the massive care backlog.

The contract has already been subject to the threat of a judicial review, after which NHS England – a non-departmental government body – agreed to three concessions, including the promise of public consultation before extending the contract.

Campaigners and legal groups are set to mount legal challenges around separate, but related, NHS dealings with Palantir.

In a notice published yesterday, NHS England said the contract would be extended until September 2023 in a deal worth £11.5 million ($13.8 million).

NHS England has been conducting a £360 million ($435 million) procurement of a separate, but linked, Federated Data Platform (FDP), a deal said to be a “must-win” for Palantir, a US data management company which cut its teeth working for the CIA and controversial US immigration agency ICE.

The contract notice for FDP, which kicks off the official competition, was originally expected in June 2022 but was delayed until September 2022, when NHS England told The Register it would be published. The notice has yet to appear.

[…]

Source: Palantir’s Covid-era UK health contract extended • The Register

Apple Faces French $8.5M Fine For Illegal Data Harvesting

France’s data protection authority, CNIL, fined Apple €8 million (about $8.5 million) Wednesday for illegally harvesting iPhone owners’ data for targeted ads without proper consent.

[…]

The French fine, though, is the latest addition to a growing body of evidence that Apple may not be the privacy guardian angel it makes itself out to be.

[…]

Apple failed to “obtain the consent of French iPhone users (iOS 14.6 version) before depositing and/or writing identifiers used for advertising purposes on their terminals,” the CNIL said in a statement. The CNIL’s fine calls out the search ads in Apple’s App Store, specifically. A French court fined the company over $1 million in December over its commercial practices related to the App Store.

[…]

Eight million euros is peanuts for a company that makes billions a year on advertising alone and is so inconceivably wealthy that it had enough money to lose $1 trillion in market value last year—making Apple the second company in history to do so. The fine could have been higher but for the fact that Apple’s European headquarters are in Ireland, not France, giving the CNIL a smaller target to go after.

Still, it’s a signal that Apple may face a less friendly regulatory future in Europe. Commercial authorities are investigating Apple for anti-competitive business practices, and are even forcing the company to abandon its proprietary charging cable in favor of USB-C ports.

Source: Apple Faces Rare $8.5M Fine For Illegal Data Harvesting

John Deere signs right to repair agreement

As farming has become more technology-driven, Deere has increasingly injected software into its products, with all of its tractors and harvesters now including an autopilot feature as standard.

There is also the John Deere Operations Center, which “instantly captures vital operational data to boost transparency and increase productivity for your business.”

Within a matter of years, the company envisages having 1.5 million machines and half a billion acres of land connected to the cloud service, which will “collect and store crop data, including millions of images of weeds that can be targeted by herbicide.”

Deere also estimates that software fees will make up 10 percent of the company’s revenues by the end of the decade, with Bernstein analysts pegging the average gross margin for farming software at 85 percent, compared to 25 percent for equipment sales.
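To see why that margin gap matters, here is an illustrative calculation using the figures quoted above; treating all non-software revenue as equipment sales is a simplifying assumption of ours, not something the article states.

```python
# Illustrative margin math from the quoted figures.
# Assumptions: software is 10% of revenue at 85% gross margin; the
# remaining 90% is equipment at 25% gross margin (a simplification).
software_share, software_margin = 0.10, 0.85
equipment_share, equipment_margin = 0.90, 0.25

total_gross_margin = (software_share * software_margin
                      + equipment_share * equipment_margin)
software_profit_share = (software_share * software_margin) / total_gross_margin

print(f"blended gross margin: {total_gross_margin:.0%}")
print(f"share of gross profit from software: {software_profit_share:.0%}")
```

Under these assumptions, software would generate roughly a quarter of gross profit from just a tenth of revenue, which helps explain the company’s push into subscriptions.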

Just like other commercial software vendors, however, Deere exercises close control and restricts what can be done with its products. This led farm labor advocacy groups to file a complaint to the US Federal Trade Commission last year, claiming that Deere unlawfully refused to provide the software and technical data necessary to repair its machinery.

“Deere is the dominant force in the $68 billion US agricultural equipment market, controlling over 50 per cent of the market for large tractors and combines,” said Fairmark Partners, the groups’ attorneys, in a preface to the complaint [PDF].

“For many farmers and ranchers, they effectively have no choice but to purchase their equipment from Deere. Not satisfied with dominating just the market for equipment, Deere has sought to leverage its power in that market to monopolize the market for repairs of that equipment, to the detriment of farmers, ranchers, and independent repair providers.”

[…]

The MoU, which can be read here [PDF], was signed yesterday at the 2023 AFBF Convention in San Juan, Puerto Rico, and seems to be a commitment by Deere to improve farmers’ access and choice when it comes to repairs.

[…]

Duvall said on a podcast about the matter that the MoU is the result of several years’ work. “As you use equipment, we all know at some point in time, there’s going to be problems with it. And we did have problems with having the opportunity to repair our equipment where we wanted to, or even repair it on the farm,” he added.

“It ensures that our farmers can repair their equipment and have access to the diagnostic tools and product guides so that they can find the problems and find solutions for them. And this is the beginning of a process that we think is going to be real healthy for our farmers and for the company because what it does is it sets up an opportunity for our farmers to really work with John Deere on a personal basis.”

[…]

Source: John Deere signs right to repair agreement • The Register

But… still gives John Deere access to their data for free?

This may also have something to do with the security of John Deere machines being so incredibly piss poor, mainly due to really bad update hygiene

Startup Claims It’s Sending Sulfur Into the Atmosphere to Fight Climate Change

A startup says it has begun releasing sulfur particles into Earth’s atmosphere, in a controversial attempt to combat climate change by deflecting sunlight. Make Sunsets, a company that sells carbon offset “cooling credits” for $10 each, is banking on solar geoengineering to cool down the planet and fill its coffers. The startup claims it has already released two test balloons, each filled with about 10 grams of sulfur particles and intended for the stratosphere, according to the company’s website and first reported by MIT Technology Review.

The concept of solar geoengineering is simple: Add reflective particles to the upper atmosphere to reduce the amount of sunlight that penetrates from space, thereby cooling Earth. It’s an idea inspired by the atmospheric side effects of major volcanic eruptions, which have led to drastic, temporary climate shifts multiple times throughout history, including the notorious “year without a summer” of 1816.

Yet effective and safe implementation of the idea is much less simple. Scientists and engineers have been studying solar geoengineering as a potential climate change remedy for more than 50 years. But almost nobody has actually enacted real-world experiments because of the associated risks, like rapid changes in our planet’s precipitation patterns, damage to the ozone layer, and significant geopolitical ramifications.

[…]

If and when we get enough sulfur into the atmosphere to meaningfully cool Earth, we’d have to keep adding new particles indefinitely to avoid entering an era of climate change about four to six times worse than what we’re currently experiencing, according to one 2018 study. Sulfur aerosols don’t stick around very long: their lifespan in the stratosphere is somewhere between a few days and a couple of years, depending on particle size and other factors.

[…]

Rogue agents independently deciding to impose geoengineering on the rest of us has been a concern for as long as the thought of intentionally manipulating the atmosphere has been around. The Pentagon even has dedicated research teams working on methods to detect and combat such clandestine attempts. But effectively defending against solar geoengineering is much more difficult than just doing it.

In Iseman’s rudimentary first trials, he says he released two weather balloons full of helium and sulfur aerosols somewhere in Baja California, Mexico. The founder told MIT Technology Review that the balloons rose toward the sky but, beyond that, he doesn’t know what happened to them, as the balloons lacked tracking equipment. Maybe they made it to the stratosphere and released their payload, maybe they didn’t.

[…]

Iseman and Make Sunsets claim that a single gram of sulfur aerosols counteracts the warming effects of one ton of CO2. But there is no clear scientific basis for such an assertion, geoengineering researcher Shuchi Talati told the outlet. And so the $10 “cooling credits” the company is hawking are likely bunk (along with most carbon credit/offset schemes).

Even if the balloons made it to the stratosphere, the small amount of sulfur released wouldn’t be enough to trigger significant environmental effects, said David Keith to MIT Technology Review.

[…]

The solution to climate change is almost certainly not a single maverick “disrupting” the composition of Earth’s stratosphere. But that hasn’t stopped Make Sunsets from reportedly raising nearly $750,000 in funds from venture capital firms. And for just ~$29,250,000 more per year, the company claims it can completely offset current warming. It’s not a bet we recommend taking.
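The company’s claimed arithmetic, taken at face value, looks like this. The sketch assumes one $10 “cooling credit” corresponds to one gram of sulfur, which the company claims offsets the warming of one ton of CO2; this is the company’s pitch, not established science.

```python
# Sketch of Make Sunsets' claimed arithmetic (the company's claims,
# not real climate science).
# Assumption: one $10 "cooling credit" = 1 gram of sulfur, claimed
# to offset the warming effect of 1 ton of CO2.
price_per_credit = 10          # USD
annual_budget = 29_250_000     # USD/year, the figure quoted above

credits_per_year = annual_budget / price_per_credit
sulfur_tonnes = credits_per_year / 1_000_000   # grams -> metric tons
claimed_co2_offset_tons = credits_per_year     # 1 ton CO2 per gram, claimed

print(f"{credits_per_year:,.0f} credits per year")
print(f"~{sulfur_tonnes:.2f} tonnes of sulfur released per year")
print(f"claimed offset: {claimed_co2_offset_tons:,.0f} tons of CO2")
```

Even on its own terms, the claimed offset is a tiny fraction of annual global CO2 emissions (tens of billions of tons), which underlines why the researchers quoted above dispute the pitch.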

Source: Startup Claims It’s Sending Sulfur Into the Atmosphere to Fight Climate Change

FSF Warns: Stay Away From iPhones, Amazon, Netflix, and Music Streaming Services

For the last thirteen years the Free Software Foundation has published its Ethical Tech Giving Guide. But what’s interesting is this year’s guide also tags companies and products with negative recommendations to “stay away from.” Stay away from: iPhones
It’s not just Siri that’s creepy: all Apple devices contain software that’s hostile to users. Although they claim to be concerned about user privacy, they don’t hesitate to put their users under surveillance.

Apple prevents you from installing third-party free software on your own phone, and they use this control to censor apps that compete with or subvert Apple’s profits.

Apple has a history of exploiting their absolute control over their users to silence political activists and help governments spy on millions of users.

Stay away from: M1 MacBook and MacBook Pro
macOS is proprietary software that restricts its users’ freedoms.

In November 2020, macOS was caught alerting Apple each time a user opens an app. Even though Apple is making changes to the service, it just goes to show how much they will try to get away with until there is an outcry.

Comes crawling with spyware that rats you out to advertisers.

Stay away from: Amazon
Amazon is one of the most notorious DRM offenders. They use this Orwellian control over their devices and services to spy on users and keep them trapped in their walled garden.

Be aware that Amazon isn’t the only peddler of ebook DRM. Disturbingly, it’s enthusiastically supported by most of the big publishing houses.

Read more about the dangers of DRM through our Defective by Design campaign.

Stay away from: Spotify, Apple Music, and all other major streaming services
In addition to streaming music encumbered by DRM, people who want to use Spotify are required to install additional proprietary software. Even Spotify’s client for GNU/Linux relies on proprietary software.

Apple Music is no better, and places heavy restrictions on the music streamed through the platform.

Stay away from: Netflix
Netflix is continuing its disturbing trend of making onerous DRM the norm for streaming media. That’s why they were a target for last year’s International Day Against DRM (IDAD).

They’re also leveraging their place in the Motion Picture Association of America (MPAA) to advocate for tighter restrictions on users, and drove the effort to embed DRM into the fabric of the Web.

“In your gift giving this year, put freedom first,” their guide begins.

And for a freedom-respecting last-minute gift idea, they suggest giving the gift of a FSF membership (which comes with a code and a printable page “so that you can present your gift as a physical object, if you like.”) The membership is valid for one year, and includes the many benefits that come with an FSF associate membership, including a USB member card, email forwarding, access to our Jitsi Meet videoconferencing server and member forum, discounts in the FSF shop and on ThinkPenguin hardware, and more.

If you are in the United States, your gift is also fully tax-deductible.

Source: FSF Warns: Stay Away From iPhones, Amazon, Netflix, and Music Streaming Services – Slashdot

Mickey’s Copyright Adventure: Early Disney Creation Will Soon Be Public Property – finally. What lawsuits lie in wait?

The version of the iconic character from “Steamboat Willie” will enter the public domain in 2024. But those trying to take advantage could end up in a legal mousetrap. From a report: There is nothing soft and cuddly about the way Disney protects the characters it brings to life. This is a company that once forced a Florida day care center to remove an unauthorized Minnie Mouse mural. In 2006, Disney told a stonemason that carving Winnie the Pooh into a child’s gravestone would violate its copyright. The company pushed so hard for an extension of copyright protections in 1998 that the result was derisively nicknamed the Mickey Mouse Protection Act. For the first time, however, one of Disney’s marquee characters — Mickey himself — is set to enter the public domain.

“Steamboat Willie,” the 1928 short film that introduced Mickey to the world, will lose copyright protection in the United States and a few other countries at the end of next year, prompting fans, copyright experts and potential Mickey grabbers to wonder: How is the notoriously litigious Disney going to respond? “I’m seeing in Reddit forums and on Twitter where people — creative types — are getting excited about the possibilities, that somehow it’s going to be open season on Mickey,” said Aaron J. Moss, a partner at Greenberg Glusker in Los Angeles who specializes in copyright and trademark law. “But that is a misunderstanding of what is happening with the copyright.” The matter is more complicated than it appears, and those who try to capitalize on the expiring “Steamboat Willie” copyright could easily end up in a legal mousetrap. “The question is where Disney tries to draw the line on enforcement,” Mr. Moss said, “and if courts get involved to draw that line judicially.”

Only one copyright is expiring. It covers the original version of Mickey Mouse as seen in “Steamboat Willie,” an eight-minute short with little plot. This nonspeaking Mickey has a rat-like nose, rudimentary eyes (no pupils) and a long tail. He can be naughty. In one “Steamboat Willie” scene, he torments a cat. In another, he uses a terrified goose as a trombone. Later versions of the character remain protected by copyrights, including the sweeter, rounder Mickey with red shorts and white gloves most familiar to audiences today. They will enter the public domain at different points over the coming decades. “Disney has regularly modernized the character, not necessarily as a program of copyright management, at least initially, but to keep up with the times,” said Jane C. Ginsburg, an authority on intellectual property law who teaches at Columbia University.

Source: Mickey’s Copyright Adventure: Early Disney Creation Will Soon Be Public Property – Slashdot

How it’s remotely possible that a company is capitalising on a thought someone had around 100 years ago is beyond me.

Meta agrees to $725 million settlement in Cambridge Analytica class-action lawsuit

It’s been four years since Facebook became embroiled in its biggest scandal to date: Cambridge Analytica. In addition to paying $4.9 billion to the Federal Trade Commission in a settlement, the social network has just agreed to pay $725 million to settle a long-running class-action lawsuit, making it the biggest settlement ever in a privacy case.

To recap, a whistleblower revealed in 2018 that now-defunct British political consulting firm Cambridge Analytica harvested the personal data of almost 90 million users without their consent for targeted political ads during the 2016 US presidential campaign and the UK’s Brexit referendum.

The controversy led to Mark Zuckerberg testifying before Congress, a $4.9 billion fine levied on the company by the FTC in July 2019, and a $100 million settlement with the US Securities and Exchange Commission. There was also a class-action lawsuit filed in 2018 on behalf of Facebook users who alleged the company violated consumer privacy laws by sharing private data with other firms.

Facebook parent Meta settled the class action in August, thereby ensuring CEO Mark Zuckerberg, chief operating officer Javier Olivan, and former COO Sheryl Sandberg avoided hours of questioning from lawyers while under oath.

[…]

This doesn’t mark the end of Meta’s dealings with the Cambridge Analytica fallout. Zuckerberg is facing a lawsuit from Washington DC’s attorney general Karl A. Racine over allegations that the Meta boss was personally involved in failures that led to the incident and his “policies enabled a multi-year effort to mislead users about the extent of Facebook’s wrongful conduct.”

Source: Meta agrees to $725 million settlement in Cambridge Analytica class-action lawsuit | TechSpot