Google wants Go to report telemetry data by default

Russ Cox, a Google software engineer steering the development of the open source Go programming language, has presented a possible plan to implement telemetry in the Go toolchain.

However, many in the Go community object because the plan calls for telemetry to be enabled by default.

These alarmed developers would prefer an opt-in rather than an opt-out regime, a position the Go team rejects because it would ensure low adoption and would reduce the amount of telemetry data received to the point it would be of little value.

Cox’s proposal is summarized in three blog posts, which distill lengthier design documentation.

Telemetry, as Cox describes it, involves the Go toolchain sending data to a server to provide information about which functions are being used and how the software is performing. He argues it is beneficial for open source projects to have that information to guide development.

“I believe that open-source software projects need to explore new telemetry designs that help developers get the information they need to work efficiently and effectively, without collecting invasive traces of detailed user activity,” he wrote.
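
To make the argument concrete, here is a minimal sketch of the kind of design Cox describes: aggregate, named counters collected locally and uploaded without detailed traces of user activity. The counter names, endpoint, and the on/off gate below are hypothetical illustrations, not the Go team’s actual implementation.

```go
// A toy illustration of "transparent telemetry": the tool increments named
// counters locally and, if reporting is enabled, uploads only the aggregate
// counts. No file paths, arguments, or other user content are collected.
// The endpoint and counter names are made up for this sketch.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"sync"
)

type counters struct {
	mu sync.Mutex
	m  map[string]int
}

func (c *counters) inc(name string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.m == nil {
		c.m = make(map[string]int)
	}
	c.m[name]++
}

// upload sends the aggregate counts as JSON. In an opt-out scheme this would
// run by default; in an opt-in scheme it would run only after explicit consent.
func (c *counters) upload(url string, enabled bool) error {
	if !enabled {
		return nil // user has opted out (or never opted in)
	}
	c.mu.Lock()
	body, err := json.Marshal(c.m)
	c.mu.Unlock()
	if err != nil {
		return err
	}
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	var c counters
	c.inc("cmd/go/build")    // hypothetical counter: a build was run
	c.inc("cmd/vet/invoked") // hypothetical counter: vet was invoked
	if err := c.upload("https://telemetry.example.org/upload", false); err != nil {
		fmt.Println("upload failed:", err)
	}
}
```

The dispute in the Go community is, in effect, about the default value of that enabled flag.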

[…]

Some people believe they have a right to privacy, to be left alone, and to demand that their rights are respected through opt-in consent.

As developer Louis Thibault put it, “The Go dev team seems not to have internalized the principle of affirmative consent in matters of data collection.”

Others, particularly in the ad industry, but in other endeavors as well, see opt-in as an existential threat. They believe that they have a right to gather data and that it’s better to seek forgiveness via opt-out than to ask for permission unlikely to be given via opt-in.

Source: Google’s Go may add telemetry reporting that’s on by default • The Register

Windows 11 Sends Tremendous Amount of User Data to Third Parties – pretty much spyware for loads of people!

Many programs collect user data and send it back to their developers to improve software or provide more targeted services. But according to the PC Security Channel (via Neowin), Microsoft’s Windows 11 sends data not only to the Redmond, Washington-based software giant, but also to multiple third parties.

To analyze DNS traffic generated by a freshly installed copy of Windows 11 on a brand-new notebook, the PC Security Channel used the Wireshark network protocol analyzer that reveals precisely what is happening on a network. The results were astounding enough for the YouTube channel to call Microsoft’s Windows 11 “spyware.”
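
Anyone can reproduce a rough version of this check. Below is a sketch of the same idea in Go, assuming libpcap and the third-party github.com/google/gopacket module are installed; the interface name is illustrative, and the program simply prints the hostname of every DNS query the machine sends, which is essentially what the Wireshark session boiled down to.

```go
// Print the hostname of every outgoing DNS query, similar in spirit to
// filtering a Wireshark capture on "udp port 53". Requires libpcap and the
// github.com/google/gopacket module; run with sufficient privileges.
// The interface name "eth0" is illustrative: substitute your own.
package main

import (
	"fmt"
	"log"

	"github.com/google/gopacket"
	"github.com/google/gopacket/layers"
	"github.com/google/gopacket/pcap"
)

func main() {
	handle, err := pcap.OpenLive("eth0", 1600, true, pcap.BlockForever)
	if err != nil {
		log.Fatal(err)
	}
	defer handle.Close()

	// Only capture DNS traffic.
	if err := handle.SetBPFFilter("udp port 53"); err != nil {
		log.Fatal(err)
	}

	source := gopacket.NewPacketSource(handle, handle.LinkType())
	for packet := range source.Packets() {
		dnsLayer := packet.Layer(layers.LayerTypeDNS)
		if dnsLayer == nil {
			continue
		}
		dns := dnsLayer.(*layers.DNS)
		for _, q := range dns.Questions {
			// Each queried hostname reveals which service the OS contacted.
			fmt.Println(string(q.Name))
		}
	}
}
```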

As it turned out, an all-new Windows 11 PC that was never used to browse the Internet contacted not only Windows Update, MSN and Bing servers, but also Steam, McAfee, geo.prod.do, and Comscore ScorecardResearch.com. Apparently, the latest operating system from Microsoft collected and sent telemetry data to various market research companies, advertising services, and the like.

To prove the point, the PC Security Channel tried to find out what Windows XP contacted after a fresh install using the same tool, and it turned out that the only things that the 20-plus-year-old operating system contacted were Windows Update and Microsoft Update servers.

“As with any modern operating system, users can expect to see data flowing to help them remain secure, up to date, and keep the system working as anticipated,” a Microsoft spokesperson told Tom’s Hardware. “We are committed to transparency and regularly publish information about the data we collect to empower customers to be more informed about their privacy.”

Some of the claims may be, technically, overblown. Telemetry data is mentioned in Windows’ terms of service, which many people skip over to use the operating system. And you can choose not to enable at least some of this by turning off settings the first time you boot into the OS.

“By accepting this agreement and using the software you agree that Microsoft may collect, use, and disclose the information as described in the Microsoft Privacy Statement (aka.ms/privacy), and as may be described in the user interface associated with the software features,” the terms of service read. It also points out that some data-sharing settings can be turned off.

Obviously, a lot has changed in 20 years and we now use more online services than back in the early 2000s. As a result, various telemetry data has to be sent online to keep certain features running. But at the very least, Microsoft should do a better job of expressly asking for consent and stating what will be sent and where, because you can’t opt out of all of the data-sharing “features.” The PC Security Channel warns that even when telemetry tracking is disabled by third-party utilities, Windows 11 still sends certain data.

Source: Windows 11 Sends Tremendous Amount of User Data to Third Parties, YouTuber Claims (Update) | Tom’s Hardware

Just when you thought Microsoft was one of the good guys again and it was all Google, Apple, Amazon, and Meta/Facebook being evil, they are back at it to prove they still have it!

Microsoft won’t access private data in Office version scan installed as OS update, they say

Microsoft wants everyone to know that it isn’t looking to invade their privacy while looking through their Windows PCs to find out-of-date versions of Office software.

In its KB5021751 update last month, Microsoft included a plan to scan Windows systems to smoke out those Office versions that are no longer supported or nearing the end of support. Those include Office 2007 (which saw support end in 2017), Office 2010 (in 2020) and the 2013 build (this coming April).

The company stressed that the scan would run only once and would not install anything on the user’s Windows system, adding that the file for the update is scanned to ensure it’s not infected by malware and is stored on highly secure servers to prevent unauthorized changes to it.

The update caused some discussion among users, at least enough to convince Microsoft to make another pitch that it is respecting user privacy and won’t access private data despite scanning their systems.

The update collects diagnostic and performance data so that it can determine the use of various versions of Office and how to best support and service them, the software maker wrote in an expanded note this week. The update will silently run once to collect the data and no files are left on the user’s systems once the scan is completed.

“This data is gathered from registry entries and APIs,” it wrote. “The update does not gather licensing details, customer content, or data about non-Microsoft products. Microsoft values, protects, and defends privacy.”
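
Reading an installed Office version “from registry entries” is mundane in practice. Here is a minimal sketch of that kind of check, assuming the golang.org/x/sys/windows/registry package and an illustrative Click-to-Run key; the actual keys and APIs Microsoft’s collector inspects are not spelled out in the note.

```go
//go:build windows

// Check a registry key that Click-to-Run installs of Office use to record the
// installed version. The key path and value name here are illustrative; the
// update described above presumably inspects a broader set of keys and APIs.
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/windows/registry"
)

func main() {
	k, err := registry.OpenKey(
		registry.LOCAL_MACHINE,
		`SOFTWARE\Microsoft\Office\ClickToRun\Configuration`,
		registry.QUERY_VALUE,
	)
	if err != nil {
		log.Fatal("no Click-to-Run Office configuration found: ", err)
	}
	defer k.Close()

	version, _, err := k.GetStringValue("VersionToReport")
	if err != nil {
		log.Fatal(err)
	}
	// Only the version string is read; no documents or user content.
	fmt.Println("installed Office build:", version)
}
```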

[…]

Source: Microsoft won’t access private data in Office version scan • The Register

Of course, just sending data about what version of Office is installed is in fact sending private data about stuff installed on your PC. This is Not OK.

Claims Datadog asked developer to kill open source data tool, which he did. And now he’s resurrected it.

After a delay of over a year, an open source code contribution to enable the export of data from Datadog’s Application Performance Monitoring (APM) platform finally got merged on Tuesday into a collection of OpenTelemetry components.

The reason for the delay, according to John Dorman, the software developer who wrote the Datadog APM Receiver code, is that, about a year ago, Datadog asked him not to submit the software.

On February 8 last year Dorman, who goes by the name “boostchicken” on GitHub, announced that he was closing his pull request – the GitHub term for a proposed code contribution to a project.

“After some consideration I’ve decided to close this PR [pull request],” he wrote. “[T]here are better ways to OTEL [OpenTelemetry] support w/ Datadog.”

Members of the open source community who are focused on application monitoring – collecting and analyzing logs, traces of app activity, and other metrics that can be useful to keep applications running – had questions, claiming that Datadog prefers to lock customers into its product.

Shortly after the post, Charity Majors, CEO of Honeycomb.io, a rival application monitoring firm, wrote a Twitter thread elaborating on the benefits of OpenTelemetry and calling out Datadog for only supporting OTEL as a one-way street.

“Datadog has been telling users they can use OTEL to get data in, but not get data out,” Majors wrote. “The Datadog OTEL collector PR was silently killed. The person who wrote it appears to have been pressured into closing it, and nothing has been proposed to replace it.”

Behavior of this sort would be inconsistent with the goals of the Cloud Native Computing Foundation’s (CNCF) OpenTelemetry project, which seeks “to provide a set of standardized vendor-agnostic SDKs, APIs, and tools for ingesting, transforming, and sending data to an Observability back-end (i.e. open source or commercial vendor).”

That is to say, the OpenTelemetry project aims to promote data portability, instead of hindering it, as is common among proprietary software vendors.
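
The portability argument is visible in the API itself: instrumentation code talks only to OpenTelemetry, and the choice of backend is made later by configuring an exporter or a collector pipeline. A minimal sketch using the go.opentelemetry.io/otel module is below; with no SDK or exporter configured, the calls are no-ops, which is exactly the point that the backend is pluggable.

```go
// Instrumentation written against the vendor-neutral OpenTelemetry API.
// Nothing here names Datadog, Honeycomb, or any other backend; spans are
// routed wherever the operator's exporter/collector configuration sends them.
package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel"
)

func main() {
	tracer := otel.Tracer("example/checkout")

	ctx, span := tracer.Start(context.Background(), "process-order")
	defer span.End()

	_, child := tracer.Start(ctx, "charge-card")
	// ... do the work ...
	child.End()

	fmt.Println("spans recorded (no-ops until an SDK exporter is configured)")
}
```

A receiver such as Dorman’s sits at the other end of that pipeline, translating Datadog’s trace agent wire format into the same vendor-neutral data model.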

The smoking hound

On January 26 Dorman confirmed suspicions that he had been approached by Datadog and asked not to proceed with his efforts.

“I owe the community an apology on this one,” Dorman wrote in his pull request thread. “I lacked the courage of my convictions and when push came to shove and I had to make the hard choice, I took the easy way out.”

“Datadog ‘asked’ me to kill this pull request. There were other members from my organization present that let me know this answer will be a ‘ok’. I am sure I could have said no, at the moment I just couldn’t fathom opening Pandora’s Box. There you have it, no NDA, no stack of cash. I left the code hoping someone could carry on. I was willing to give [Datadog] this code, no strings attached as long as it moved OTel forward. They declined.”

He added, “However, I told them if you don’t support OpenTelemetry in a meaningful way, I will start sending pull requests again. So here we are. I feel I have given them enough time to do the right thing.”

Indeed, Dorman subsequently re-opened his pull request, which on Tuesday was merged into the repository for OpenTelemetry Collector components. His Datadog APM Receiver can ingest traces in the Datadog Trace Agent Format.

Coincidentally, Datadog on Tuesday published a blog post titled, “Datadog’s commitment to OpenTelemetry and the open source community.” It makes no mention of the alleged request to “kill [the] pull request.” Instead, it enumerates various ways in which the company has supported OpenTelemetry recently.

The Register asked Datadog for comment. We’ve not heard back.

Dorman, who presently works for Meta, did not respond to a request for comment. However, last week, via Twitter, he credited Grafana, an open source Datadog competitor, for having “formally sponsored” the work and for pointing out that Datadog “refuses to support OTEL in meaningful ways.”

The OpenTelemetry Governance Committee for the CNCF provided The Register with the following statement:

“We’re still trying to make sense of what happened here; we’ll comment on it once we have a full understanding. Regardless, we are happy to review and accept any contributions which push the project forward, and this [pull request] was merged yesterday,” it said.

Source: Claims Datadog asked developer to kill open source data tool • The Register

FTC Fines GoodRx $1.5M for Sending Medication Data to Facebook, Google, others

The Federal Trade Commission took historic action against the medication discount service GoodRx Wednesday, issuing a $1.5 million fine against the company for sharing data about users’ prescriptions with Facebook, Google, and others. It’s a move that could usher in a new era of health privacy in the United States.

“Digital health companies and mobile apps should not cash in on consumer’s extremely sensitive and personally identifiable health information,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection, in a statement.

[…]

In addition to a fine, GoodRx has agreed to a first-of-its-kind provision banning the company from sharing health data with third parties for advertising purposes. That may sound unsurprising, but many consumers don’t realize that health privacy laws generally don’t apply to companies that aren’t affiliated with doctors or insurance companies.

[…]

GoodRx is a health technology company that gives out free coupons for discounts on common medications. The company also connects users with healthcare providers for telehealth visits. GoodRx also shared data about the prescriptions you’re buying and looking up with third-party advertising companies, which incurred the ire of the FTC.

GoodRx’s privacy problems were first uncovered by this reporter in an investigation with Consumer Reports, followed by a similar report in Gizmodo. At the time, if you looked up Viagra, Prozac, PrEP, or any other medication, GoodRx would tell Facebook, Google, and a variety of companies in the ad business, such as Criteo, Branch, and Twilio. GoodRx wasn’t selling the data. Instead, it shared the information so those companies could help GoodRx target its own customers with ads for more drugs.

[…]

Source: FTC Fines GoodRx $1.5M for Sending Medication Data to Facebook

An AI robot lawyer was set to argue in court. Scared lawyers shut it down with jail threats

A British man who planned to have a “robot lawyer” help a defendant fight a traffic ticket has dropped the effort after receiving threats of possible prosecution and jail time.

Joshua Browder, the CEO of the New York-based startup DoNotPay, created a way for people contesting traffic tickets to use arguments in court generated by artificial intelligence.

Here’s how it was supposed to work: The person challenging a speeding ticket would wear smart glasses that both record court proceedings and dictate responses into the defendant’s ear from a small speaker. The system relied on a few leading AI text generators, including ChatGPT and DaVinci.

The first-ever AI-powered legal defense was set to take place in California on Feb. 22, but not anymore.

As word got out, an uneasy buzz began to swirl among various state bar officials, according to Browder. He says angry letters began to pour in.

“Multiple state bars have threatened us,” Browder said. “One even said a referral to the district attorney’s office and prosecution and prison time would be possible.”

In particular, Browder said one state bar official noted that the unauthorized practice of law is a misdemeanor in some states, punishable by up to six months in county jail.

“Even if it wouldn’t happen, the threat of criminal charges was enough to give it up,” he said. “The letters have become so frequent that we thought it was just a distraction and that we should move on.”

State bar organizations license and regulate attorneys, as a way to ensure people hire lawyers who understand the law.

Browder declined to say which state bar in particular sent letters, or which official made the threat of possible prosecution, saying his startup, DoNotPay, is under investigation by multiple state bars, including California’s.

[…]

“The truth is, most people can’t afford lawyers,” he said. “This could’ve shifted the balance and allowed people to use tools like ChatGPT in the courtroom that maybe could’ve helped them win cases.”

The future of robot lawyers faces uncertainty for another reason that is far simpler than the bar officials’ existential questions: courtroom rules.

Recording audio during a live legal proceeding is not permitted in federal court and is often prohibited in state courts. The AI tools developed by DoNotPay, which remain completely untested in actual courtrooms, require recording audio of arguments in order for the machine-learning algorithm to generate responses.

“I think calling the tool a ‘robot lawyer’ really riled a lot of lawyers up,” Browder said. “But I think they’re missing the forest for the trees. Technology is advancing and courtroom rules are very outdated.”

Source: An AI robot lawyer was set to argue in court. Real lawyers shut it down. : NPR

Lawyers protecting their own at the cost of the population? Who’d have thunk it?

Meta’s WhatsApp fined 5.5 mln euro by lead EU privacy regulator

Meta’s (META.O) WhatsApp subsidiary was fined 5.5 million euros ($5.95 million) on Thursday by Ireland’s Data Protection Commission (DPC), its lead EU privacy regulator, for an additional breach of the bloc’s privacy laws.

The DPC also told WhatsApp to reassess how it uses personal data for service improvements following a similar order it issued this month to Meta’s other main platforms, Facebook and Instagram, which stated Meta must reassess the legal basis upon which it targets advertising through the use of personal data.

[…]

Source: Meta’s WhatsApp fined 5.5 mln euro by lead EU privacy regulator | Reuters

Google Accused of Creating Digital Ad Monopoly in New Justice Dept. Suit

The Department of Justice filed a lawsuit against Google Tuesday, accusing the tech giant of using its market power to create a monopoly in the digital advertising business over the course of 15 years.

Google “corrupted legitimate competition in the ad tech industry by engaging in a systematic campaign to seize control of the wide swath of high-tech tools used by publishers, advertisers and brokers, to facilitate digital advertising,” the Justice Department alleges. Eight state attorneys general joined in the suit, filed in Virginia federal court. Google has faced five antitrust suits since 2020.

[…]

Source: Google Accused of Digital Ad Monopoly in New Justice Dept. Suit

Indian Android Users Can Finally Use Alternate Search and Payment Methods and forked Google apps

Android users in India will soon have more control over their devices, thanks to a court ruling. Beginning next month, Indian Android wielders can choose a different billing system when paying for apps and in-app smartphone purchases rather than default to going through the Play Store. Google will also allow Indian users to select a different search engine as their default right as they set up a new device, which might have implications for upcoming EU regulations.

The move comes after a ruling last week by India’s Supreme Court. The trial started late last year when the Competition Commission of India (CCI) fined Google $161 million for imposing restrictions on its manufacturing partners. Google attempted to challenge the order by maintaining this kind of practice would stall the Android ecosystem and that “no other jurisdiction has ever asked for such far-reaching changes.”

[…]

Google also won’t be able to require the installation of its branded apps to grant the license for running Android OS anymore. From now on, device manufacturers in India will be able to license “individual Google apps” as they like for pre-installation rather than needing to bundle the whole kit and caboodle. Google is also updating the Android compatibility requirements for its OEM partners to “build non-compatible or forked variants.”

[…]

Of particular note is seeing how users will react to being able to choose whether to buy apps and other in-app purchases through the Play Store, where Google takes a 30% cut from each transaction, or through an alternative billing service like JIO Money or Paytm—or even Amazon Pay, available in India.

[…]

The Department of Justice in the United States is also suing Google’s parent company, Alphabet, for a second time this week for practices within its digital advertising business, alleging that the company “corrupted legitimate competition in the ad tech industry” to build out its monopoly.

Source: Indian Android Users Can Finally Use Alternate Search and Payment Methods

US law enforcement has warrantless access to many money transfers

Your international money transfers might not be as discreet as you think. Senator Ron Wyden and The Wall Street Journal have learned that US law enforcement can access details of money transfers without a warrant through an obscure surveillance program the Arizona attorney general’s office created in 2014. A database stored at a nonprofit, the Transaction Record Analysis Center (TRAC), provides full names and amounts for larger transfers (above $500) sent between the US, Mexico and 22 other regions through services like Western Union, MoneyGram and Viamericas. The program covers data for numerous Caribbean and Latin American countries in addition to Canada, China, France, Malaysia, Spain, Thailand, Ukraine and the US Virgin Islands. Some domestic transfers also enter the data set.

[…]

The concern, of course, is that officials can obtain sensitive transaction details without court oversight or customers’ knowledge. An unscrupulous officer could secretly track large transfers. Wyden adds that the people in the database are more likely to be immigrants, minorities and low-income residents who don’t have bank accounts and already have fewer privacy protections. The American Civil Liberties Union also asserts that the subpoenas used to obtain this data violate federal law. Arizona issued at least 140 of these subpoenas between 2014 and 2021.

[…]

Source: US law enforcement has warrantless access to many money transfers | Engadget

Meta sues surveillance company for allegedly scraping more than 600,000 accounts – pots and kettles

Meta has filed a lawsuit against Voyager Labs, which it has accused of creating tens of thousands of fake accounts to scrape data from more than 600,000 Facebook users’ profiles. It says the surveillance company pulled information such as posts, likes, friend lists, photos, and comments, along with other details from groups and pages. Meta claims that Voyager masked its activity using its Surveillance Software, and that the company has also scraped data from Instagram, Twitter, YouTube, LinkedIn and Telegram to sell and license for profit.

In the complaint, which was obtained by Gizmodo, Meta has asked a judge to permanently ban Voyager from Facebook and Instagram. “As a direct result of Defendant’s unlawful actions, Meta has suffered and continues to suffer irreparable harm for which there is no adequate remedy at law, and which will continue unless Defendant’s actions are enjoined,” the filing reads. Meta said Voyager’s actions have caused it “to incur damages, including investigative costs, in an amount to be proven at trial.”

Meta claims that Voyager scraped data from accounts belonging to “employees of non-profit organizations, universities, news media organizations, healthcare facilities, the armed forces of the United States, and local, state, and federal government agencies, as well as full-time parents, retirees, and union members.” The company noted in a blog post that it disabled accounts linked to Voyager and that it filed the suit to enforce its terms and policies.

[…]

In 2021, The Guardian reported that the Los Angeles Police Department had tested Voyager’s social media surveillance tools in 2019. The company is said to have told the department that police could use the software to track the accounts of a suspect’s friends on social media, and that the system could predict crimes before they took place by making assumptions about a person’s activity.

According to The Guardian, Voyager has suggested factors like Instagram usernames denoting Arab pride or tweeting about Islam could indicate someone is leaning toward extremism. Other companies, such as Palantir, have worked on predictive policing tech. Critics such as the Electronic Frontier Foundation claim that tech can’t predict crime and that algorithms merely perpetuate existing biases.

Data scraping is an issue that Meta has to take seriously. In 2021, it sued an individual for allegedly scraping data on more than 178 million users. Last November, the Irish Data Protection Commission fined the company €265 million ($277 million) for failing to stop bad actors from obtaining millions of people’s phone numbers and other data, which were published elsewhere online. The regulator said Meta failed to comply with GDPR data protection rules.

Source: Meta sues surveillance company for allegedly scraping more than 600,000 accounts | Engadget

Google will pay $9.5 million to settle Washington DC AG’s location-tracking lawsuit

Google has agreed to pay $9.5 million to settle a lawsuit brought by Washington DC Attorney General Karl Racine, who accused the company earlier this year of “deceiving users and invading their privacy.” Google has also agreed to change some of its practices, primarily concerning how it informs users about collecting, storing and using their location data.

“Google leads consumers to believe that consumers are in control of whether Google collects and retains information about their location and how that information is used,” the complaint, which Racine filed in January, read. “In reality, consumers who use Google products cannot prevent Google from collecting, storing and profiting from their location.”

Racine’s office also accused Google of employing “dark patterns,” which are design choices intended to deceive users into carrying out actions that don’t benefit them. Specifically, the AG’s office claimed that Google repeatedly prompted users to switch on location tracking in certain apps and informed them that certain features wouldn’t work properly if location tracking wasn’t on. Racine and his team found that location data wasn’t even needed for the app in question. They asserted that Google made it “impossible for users to opt out of having their location tracked.”

The $9.5 million payment is a paltry one for Google. Last quarter, it took parent company Alphabet under 20 minutes to make that much in revenue. The changes that the company will make to its practices as part of the settlement may have a bigger impact.

Folks who currently have certain location settings on will receive notifications telling them how they can disable each setting, delete the associated data and limit how long Google can keep that information. Users who set up a new Google account will be informed which location-related account settings are on by default and offered the chance to opt out.

Google will need to maintain a webpage that details its location data practices and policies. This will include ways for users to access their location settings and details about how each setting impacts Google’s collection, retention or use of location data.

Moreover, Google will be prevented from sharing a person’s precise location data with a third-party advertiser without the user’s explicit consent. The company will need to delete location data “that came from a device or from an IP address in web and app activity within 30 days” of obtaining the information.

[…]

Source: Google will pay $9.5 million to settle Washington DC AG’s location-tracking lawsuit | Engadget

Spy Tech Palantir’s Covid-era UK health contract extended without public consultation or competition

NHS England has extended its contract with US spy-tech biz Palantir for the system built at the height of the pandemic to give it time to resolve the twice-delayed procurement of a data platform to support health service reorganization and tackle the massive care backlog.

The contract has already been subject to the threat of a judicial review, after which NHS England – a non-departmental government body – agreed to three concessions, including the promise of public consultation before extending the contract.

Campaigners and legal groups are set to mount legal challenges around separate, but related, NHS dealings with Palantir.

In a notice published yesterday, NHS England said the contract would be extended until September 2023 in a deal worth £11.5 million ($13.8 million).

NHS England has been conducting a £360 million ($435 million) procurement of a separate, but linked, Federated Data Platform (FDP), a deal said to be a “must-win” for Palantir, a US data management company which cut its teeth working for the CIA and controversial US immigration agency ICE.

The contract notice for FDP, which kicks off the official competition, was originally expected in June 2022 but was delayed until September 2022, when NHS England told The Register it would be published. The notice has yet to appear.

[…]

Source: Palantir’s Covid-era UK health contract extended • The Register

Apple Faces French $8.5M Fine For Illegal Data Harvesting

France’s data protection authority, CNIL, fined Apple €8 million (about $8.5 million) Wednesday for illegally harvesting iPhone owners’ data for targeted ads without proper consent.

[…]

The French fine, though, is the latest addition to a growing body of evidence that Apple may not be the privacy guardian angel it makes itself out to be.

[…]

Apple failed to “obtain the consent of French iPhone users (iOS 14.6 version) before depositing and/or writing identifiers used for advertising purposes on their terminals,” the CNIL said in a statement. The CNIL’s fine calls out the search ads in Apple’s App Store, specifically. A French court fined the company over $1 million in December over its commercial practices related to the App Store.

[…]

Eight million euros is peanuts for a company that makes billions a year on advertising alone and is so inconceivably wealthy that it had enough money to lose $1 trillion in market value last year—making Apple the second company in history to do so. The fine could have been higher but for the fact that Apple’s European headquarters are in Ireland, not France, giving the CNIL a smaller target to go after.

Still, it’s a signal that Apple may face a less friendly regulatory future in Europe. Commercial authorities are investigating Apple for anti-competitive business practices, and are even forcing the company to abandon its proprietary charging cable in favor of USB-C ports.

Source: Apple Faces Rare $8.5M Fine For Illegal Data Harvesting

John Deere signs right to repair agreement

As farming has become more technology-driven, Deere has increasingly injected software into its products with all of its tractors and harvesters now including an autopilot feature as standard.

There is also the John Deere Operations Center, which “instantly captures vital operational data to boost transparency and increase productivity for your business.”

Within a matter of years, the company envisages having 1.5 million machines and half a billion acres of land connected to the cloud service, which will “collect and store crop data, including millions of images of weeds that can be targeted by herbicide.”

Deere also estimates that software fees will make up 10 percent of the company’s revenues by the end of the decade, with Bernstein analysts pegging the average gross margin for farming software at 85 percent, compared to 25 percent for equipment sales.

Just like other commercial software vendors, however, Deere exercises close control and restricts what can be done with its products. This led farm labor advocacy groups to file a complaint to the US Federal Trade Commission last year, claiming that Deere unlawfully refused to provide the software and technical data necessary to repair its machinery.

“Deere is the dominant force in the $68 billion US agricultural equipment market, controlling over 50 per cent of the market for large tractors and combines,” said Fairmark Partners, the groups’ attorneys, in a preface to the complaint [PDF].

“For many farmers and ranchers, they effectively have no choice but to purchase their equipment from Deere. Not satisfied with dominating just the market for equipment, Deere has sought to leverage its power in that market to monopolize the market for repairs of that equipment, to the detriment of farmers, ranchers, and independent repair providers.”

[…]

The MoU, which can be read here [PDF], was signed yesterday at the 2023 AFBF Convention in San Juan, Puerto Rico, and seems to be a commitment by Deere to improve farmers’ access and choice when it comes to repairs.

[…]

Duvall said on a podcast about the matter that the MoU is the result of several years’ work. “As you use equipment, we all know at some point in time, there’s going to be problems with it. And we did have problems with having the opportunity to repair our equipment where we wanted to, or even repair it on the farm,” he added.

“It ensures that our farmers can repair their equipment and have access to the diagnostic tools and product guides so that they can find the problems and find solutions for them. And this is the beginning of a process that we think is going to be real healthy for our farmers and for the company because what it does is it sets up an opportunity for our farmers to really work with John Deere on a personal basis.”

[…]

Source: John Deere signs right to repair agreement • The Register

But… still gives John Deere access to their data for free?

This may also have something to do with the security of John Deere machines being so incredibly piss poor, mainly due to really bad update hygiene

Startup Claims It’s Sending Sulfur Into the Atmosphere to Fight Climate Change

A startup says it has begun releasing sulfur particles into Earth’s atmosphere, in a controversial attempt to combat climate change by deflecting sunlight. Make Sunsets, a company that sells carbon offset “cooling credits” for $10 each, is banking on solar geoengineering to cool down the planet and fill its coffers. The startup claims it has already released two test balloons, each filled with about 10 grams of sulfur particles and intended for the stratosphere, according to the company’s website and first reported on by MIT Technology Review.

The concept of solar geoengineering is simple: Add reflective particles to the upper atmosphere to reduce the amount of sunlight that penetrates from space, thereby cooling Earth. It’s an idea inspired by the atmospheric side effects of major volcanic eruptions, which have led to drastic, temporary climate shifts multiple times throughout history, including the notorious “year without a summer” of 1816.

Yet effective and safe implementation of the idea is much less simple. Scientists and engineers have been studying solar geoengineering as a potential climate change remedy for more than 50 years. But almost nobody has actually enacted real-world experiments because of the associated risks, like rapid changes in our planet’s precipitation patterns, damage to the ozone layer, and significant geopolitical ramifications.

[…]

If and when we get enough sulfur into the atmosphere to meaningfully cool Earth, we’d have to keep adding new particles indefinitely to avoid entering an era of climate change about four to six times worse than what we’re currently experiencing, according to one 2018 study. Sulfur aerosols don’t stick around very long. Their lifespan in the stratosphere is somewhere between a few days and a couple years, depending on particle size and other factors.

[…]

Rogue agents independently deciding to impose geoengineering on the rest of us has been a concern for as long as the thought of intentionally manipulating the atmosphere has been around. The Pentagon even has dedicated research teams working on methods to detect and combat such clandestine attempts. But effectively defending against solar geoengineering is much more difficult than just doing it.

In Iseman’s rudimentary first trials, he says he released two weather balloons full of helium and sulfur aerosols somewhere in Baja California, Mexico. The founder told MIT Technology Review that the balloons rose toward the sky but, beyond that, he doesn’t know what happened to them, as the balloons lacked tracking equipment. Maybe they made it to the stratosphere and released their payload, maybe they didn’t.

[…]

Iseman and Make Sunsets claim that a single gram of sulfur aerosols counteracts the warming effects of one ton of CO2. But there is no clear scientific basis for such an assertion, geoengineering researcher Shuchi Talati told the outlet. And so the $10 “cooling credits” the company is hawking are likely bunk (along with most carbon credit/offset schemes.)

Even if the balloons made it to the stratosphere, the small amount of sulfur released wouldn’t be enough to trigger significant environmental effects, said David Keith to MIT Technology Review.

[…]

The solution to climate change is almost certainly not a single maverick “disrupting” the composition of Earth’s stratosphere. But that hasn’t stopped Make Sunsets from reportedly raising nearly $750,000 in funds from venture capital firms. And for just ~$29,250,000 more per year, the company claims it can completely offset current warming. It’s not a bet we recommend taking.
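
Taking the startup’s own figures at face value, the implied quantities are easy to work out; bear in mind the per-credit conversion is Make Sunsets’ claim, not an established scientific equivalence.

```go
// Back-of-the-envelope arithmetic using only the figures quoted above.
// The 1 g of sulfur : 1 t of CO2 conversion is Make Sunsets' claim, not
// an established scientific equivalence.
package main

import "fmt"

func main() {
	const (
		dollarsPerCredit = 10.0         // price of one "cooling credit"
		gramsPerCredit   = 1.0          // claimed sulfur released per credit
		tonnesCO2Credit  = 1.0          // claimed warming offset per credit
		annualSpend      = 29_250_000.0 // quoted cost to "offset current warming"
	)

	credits := annualSpend / dollarsPerCredit
	fmt.Printf("credits per year:        %.0f\n", credits)                           // ~2.9 million
	fmt.Printf("sulfur per year:         %.2f tonnes\n", credits*gramsPerCredit/1e6) // ~2.9 t
	fmt.Printf("claimed CO2 offset/year: %.2f million tonnes\n", credits*tonnesCO2Credit/1e6)
}
```

That works out to roughly three tonnes of sulfur per year, which the company’s own arithmetic says would offset warming equivalent to about three million tonnes of CO2.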

Source: Startup Claims It’s Sending Sulfur Into the Atmosphere to Fight Climate Change

FSF Warns: Stay Away From iPhones, Amazon, Netflix, and Music Streaming Services

For the last thirteen years the Free Software Foundation has published its Ethical Tech Giving Guide. But what’s interesting is this year’s guide also tags companies and products with negative recommendations to “stay away from.”

Stay away from: iPhones
It’s not just Siri that’s creepy: all Apple devices contain software that’s hostile to users. Although they claim to be concerned about user privacy, they don’t hesitate to put their users under surveillance.

Apple prevents you from installing third-party free software on your own phone, and they use this control to censor apps that compete with or subvert Apple’s profits.

Apple has a history of exploiting their absolute control over their users to silence political activists and help governments spy on millions of users.

Stay away from: M1 MacBook and MacBook Pro
macOS is proprietary software that restricts its users’ freedoms.

In November 2020, macOS was caught alerting Apple each time a user opens an app. Even though Apple is making changes to the service, it just goes to show how bad they try to be until there is an outcry.

Comes crawling with spyware that rats you out to advertisers.

Stay away from: Amazon
Amazon is one of the most notorious DRM offenders. They use this Orwellian control over their devices and services to spy on users and keep them trapped in their walled garden.

Be aware that Amazon isn’t the only peddler of ebook DRM. Disturbingly, it’s enthusiastically supported by most of the big publishing houses.

Read more about the dangers of DRM through our Defective by Design campaign.

Stay away from: Spotify, Apple Music, and all other major streaming services
In addition to streaming music encumbered by DRM, people who want to use Spotify are required to install additional proprietary software. Even Spotify’s client for GNU/Linux relies on proprietary software.

Apple Music is no better, and places heavy restrictions on the music streamed through the platform.

Stay away from: Netflix
Netflix is continuing its disturbing trend of making onerous DRM the norm for streaming media. That’s why they were a target for last year’s International Day Against DRM (IDAD).

They’re also leveraging their place in the Motion Picture Association of America (MPAA) to advocate for tighter restrictions on users, and drove the effort to embed DRM into the fabric of the Web.

“In your gift giving this year, put freedom first,” their guide begins.

And for a freedom-respecting last-minute gift idea, they suggest giving the gift of a FSF membership (which comes with a code and a printable page “so that you can present your gift as a physical object, if you like.”) The membership is valid for one year, and includes the many benefits that come with an FSF associate membership, including a USB member card, email forwarding, access to our Jitsi Meet videoconferencing server and member forum, discounts in the FSF shop and on ThinkPenguin hardware, and more.

If you are in the United States, your gift would also be fully tax-deductible in the USA.

Source: FSF Warns: Stay Away From iPhones, Amazon, Netflix, and Music Streaming Services – Slashdot

Mickey’s Copyright Adventure: Early Disney Creation Will Soon Be Public Property – finally. What lawsuits lie in wait?

The version of the iconic character from “Steamboat Willie” will enter the public domain in 2024. But those trying to take advantage could end up in a legal mousetrap. From a report: There is nothing soft and cuddly about the way Disney protects the characters it brings to life. This is a company that once forced a Florida day care center to remove an unauthorized Minnie Mouse mural. In 2006, Disney told a stonemason that carving Winnie the Pooh into a child’s gravestone would violate its copyright. The company pushed so hard for an extension of copyright protections in 1998 that the result was derisively nicknamed the Mickey Mouse Protection Act. For the first time, however, one of Disney’s marquee characters — Mickey himself — is set to enter the public domain.

“Steamboat Willie,” the 1928 short film that introduced Mickey to the world, will lose copyright protection in the United States and a few other countries at the end of next year, prompting fans, copyright experts and potential Mickey grabbers to wonder: How is the notoriously litigious Disney going to respond? “I’m seeing in Reddit forums and on Twitter where people — creative types — are getting excited about the possibilities, that somehow it’s going to be open season on Mickey,” said Aaron J. Moss, a partner at Greenberg Glusker in Los Angeles who specializes in copyright and trademark law. “But that is a misunderstanding of what is happening with the copyright.” The matter is more complicated than it appears, and those who try to capitalize on the expiring “Steamboat Willie” copyright could easily end up in a legal mousetrap. “The question is where Disney tries to draw the line on enforcement,” Mr. Moss said, “and if courts get involved to draw that line judicially.”

Only one copyright is expiring. It covers the original version of Mickey Mouse as seen in “Steamboat Willie,” an eight-minute short with little plot. This nonspeaking Mickey has a rat-like nose, rudimentary eyes (no pupils) and a long tail. He can be naughty. In one “Steamboat Willie” scene, he torments a cat. In another, he uses a terrified goose as a trombone. Later versions of the character remain protected by copyrights, including the sweeter, rounder Mickey with red shorts and white gloves most familiar to audiences today. They will enter the public domain at different points over the coming decades. “Disney has regularly modernized the character, not necessarily as a program of copyright management, at least initially, but to keep up with the times,” said Jane C. Ginsburg, an authority on intellectual property law who teaches at Columbia University.

Source: Mickey’s Copyright Adventure: Early Disney Creation Will Soon Be Public Property – Slashdot

How it’s remotely possible that a company is capitalising on a thought someone had around 100 years ago is beyond me.

Meta agrees to $725 million settlement in Cambridge Analytica class-action lawsuit

It’s been four years since Facebook became embroiled in its biggest scandal to date: Cambridge Analytica. In addition to paying $4.9 billion to the Federal Trade Commission in a settlement, the social network has just agreed to pay $725 million to settle a long-running class-action lawsuit, making it the biggest settlement ever in a privacy case.

To recap, a whistleblower revealed in 2018 that now-defunct British political consulting firm Cambridge Analytica harvested the personal data of almost 90 million users without their consent for targeted political ads during the 2016 US presidential campaign and the UK’s Brexit referendum.

The controversy led to Mark Zuckerberg testifying before congress, a $4.9 billion fine levied on the company by the FTC in July 2019, and a $100 million settlement with the US Securities and Exchange Commission. There was also a class-action lawsuit filed in 2018 on behalf of Facebook users who alleged the company violated consumer privacy laws by sharing private data with other firms.

Facebook parent Meta settled the class action in August, thereby ensuring CEO Mark Zuckerberg, chief operating officer Javier Olivan, and former COO Sheryl Sandberg avoided hours of questioning from lawyers while under oath.

[…]

This doesn’t mark the end of Meta’s dealings with the Cambridge Analytica fallout. Zuckerberg is facing a lawsuit from Washington DC’s attorney general Karl A. Racine over allegations that the Meta boss was personally involved in failures that led to the incident and his “policies enabled a multi-year effort to mislead users about the extent of Facebook’s wrongful conduct.”

Source: Meta agrees to $725 million settlement in Cambridge Analytica class-action lawsuit | TechSpot

The Copyright Industry Is About To Discover That There Are Hundreds Of Thousands Of Songs Generated By AI Already Available, Already Popular

You may have noticed the world getting excited about the capabilities of ChatGPT, a text-based AI chat bot. Similarly, some are getting quite worked up over generative AI systems that can turn text prompts into images, including those mimicking the style of particular artists. But less remarked upon is the use of AI in the world of music. Music Business Worldwide has written two detailed news stories on the topic. The first comes from China:

Tencent Music Entertainment (TME) says that it has created and released over 1,000 tracks containing vocals created by AI tech that mimics the human voice.

And get this: one of these tracks has already surpassed 100 million streams.

Some of these songs use synthetic voices based on human singers, both dead and alive:

TME also confirmed today (November 15) that – in addition to “paying tribute” to the vocals of dead artists via the Lingyin Engine – it has also created “an AI singer lineup with the voices of trending [i.e currently active] stars such as Yang Chaoyue, among others”.

The copyright industry will doubtless have something to say about that. It is also unlikely to be delighted by the second Music Business Worldwide story about AI-generated music, this time in the Middle East and North Africa (MENA) market:

MENA-focused Spotify rival, Anghami, is now taking the concept to a whole other level – claiming that it will soon become the first platform to host over 200,000 songs generated by AI.

Anghami has partnered with a generative music platform called Mubert, which says it allows users to create “unique soundtracks” for various uses such as social media, presentations or films using one million samples from over 4,000 musicians.

According to Mohammed Ogaily, VP Product at Anghami, the service has already “generated over 170,000 songs, based on three sets of lyrics, three talents, and 2,000 tracks generated by AI”.

It’s striking that the undoubtedly interesting but theoretical possibilities of ChatGPT and generative AI art are dominating the headlines, while we hear relatively little about these AI-based music services that are already up and running, and hugely popular with listeners. It’s probably a result of the generally parochial nature of mainstream Western media, which often ignores the important developments happening elsewhere.

Source: The Copyright Industry Is About To Discover That There Are Hundreds Of Thousands Of Songs Generated By AI Already Available, Already Popular | Techdirt

AI-Created Comic Has Copyright Protection Revoked by US

The United States Copyright Office (USCO) reversed an earlier decision to grant a copyright to a comic book that was created using “A.I. art,” and announced that the copyright protection on the comic book will be revoked, stating that copyrighted works must be created by humans to gain official copyright protection.

In September, Kris Kashtanova announced that they had received a U.S. copyright on their comic book, Zarya of the Dawn, which was inspired by their late grandmother and created with the text-to-image engine Midjourney. Kashtanova referred to themselves as a “prompt engineer” and explained at the time that they sought the copyright so that they could “make a case that we do own copyright when we make something using AI.”

[…]

Source: AI-Created Comic Has Been Deemed Ineligible for Copyright Protection

I guess there is no big corporate interest in lobbying for AI created content – yet – and so the copyright masters have no idea what to do without their corporate cash carrying masters telling them what to do.

Epic Forced To Pay $520 Million Fine over Fortnite Privacy and Dark Patterns

Fortnite-maker Epic Games has agreed to pay a massive $520 million fine in settlements with the Federal Trade Commission for allegedly illegally gathering data from children and deploying dark pattern techniques to manipulate users into making unwanted in-game purchases. The fines mark a major regulatory win for the Biden administration’s progressive-minded FTC, which, up until now, had largely failed to deliver on its promise of more robust enforcement against U.S. tech companies.

The first $275 million fine will settle allegations Epic collected personal information from children under the age of 13 without their parents’ consent when they played the hugely popular battle royale game. The FTC claims that unjustified data collection violates the Children’s Online Privacy Protection Act. Internal Epic surveys and the licensing of Fortnite-branded toys, the FTC alleges, show Epic clearly knew at least some of its player base was underage. Worse still, the agency claims Epic forced parents to wade through cumbersome barriers when they requested to have their children’s data deleted.

[…]

The game-maker additionally agreed to pay $245 million to refund customers who the FTC says fell victim to manipulative, unfair billing practices that fall under the category of “dark patterns.” Fortnite allegedly deployed a “counterintuitive, inconsistent, and confusing button configuration” that led players to incur unwanted charges with a single press of a button. In some cases, the FTC claims, that single button press meant users were charged while sitting in a loading screen or while trying to wake the game from sleep mode. Users, the complaint alleges, collectively lost hundreds of millions of dollars to those shady practices. Epic allegedly “ignored more than one million user complaints,” suggesting a high number of users were being wrongly charged.

[…]

And though the FTC’s latest fine is a far cry from the $5 billion penalty the agency issued against Facebook in 2019 and represents just a portion of the billions Fortnite reportedly rakes in each year, supporters said it nonetheless represents more than a mere slap on the wrist.

[…]

Source: Epic Forced To Pay Record-Breaking $520 Million Fine

China’s Setting the Standard for Deepfake Regulation

[…]

On January 10, according to The South China Morning Post, China’s Cyberspace Administration will implement new rules that are intended to protect people from having their voice or image digitally impersonated without their consent. The regulators refer to platforms and services using the technology to edit a person’s voice or image as, “deep synthesis providers.”

Those deep synthesis technologies could include the use of deep learning algorithms and augmented reality to generate text, audio, images or video. We’ve already seen numerous instances over the years of these technologies used to impersonate high profile individuals, ranging from celebrities and tech executives to political figures.

Under the new guidelines, companies and technologists who use the technology must first contact and receive the consent from individuals before they edit their voice or image. The rules, officially called The Administrative Provisions on Deep Synthesis for Internet Information Services come in response to governmental concerns that advances in AI tech could be used by bad actors to run scams or defame people by impersonating their identity. In presenting the guidelines, the regulators also acknowledge areas where these technologies could prove useful. Rather than impose a wholesale ban, the regulator says it would actually promote the tech’s legal use and, “provide powerful legal protection to ensure and facilitate,” its development.

But, like many of China’s proposed tech policies, political considerations are inseparable. According to the South China Morning Post, news stories reposted using the technology must come from a government approved list of news outlets. Similarly, the rules require all so-called deep synthesis providers adhere to local laws and maintain “correct political direction and correct public opinion orientation.” Correct here, of course, is determined unilaterally by the state.

Though certain U.S. states like New Jersey and Illinois have introduced local privacy legislation that addresses deepfakes, the lack of any meaningful federal privacy laws limits regulators’ abilities to address the tech on a national level. In the private sector, major U.S. platforms like Facebook and Twitter have created new systems meant to detect and flag deepfakes, though they are constantly trying to stay one step ahead of bad actors continually looking for ways to evade those filters.

If China’s new rules are successful, it could lay down a policy framework other nations could build upon and adapt. It wouldn’t be the first time China’s led the pack on strict tech reform. Last year, China introduced sweeping new data privacy laws that radically limited the ways private companies could collect an individual’s personal identity. Those rules were built off of Europe’s General Data Protection Regulation.

[…]

That all sounds great, but China’s privacy laws have one glaring loophole tucked within it. Though the law protects people from private companies feeding off their data, it does almost nothing to prevent those same harms being carried out by the government. Similarly, with deepfakes, it’s unclear how the newly proposed regulations would, for instance, prohibit a state-run agency from doctoring or manipulating certain text or audio to influence the narrative around controversial or sensitive political events.

Source: China’s Setting the Standard for Deepfake Regulation

China is also the one setting the bar for anti-monopolistic practices, while the EU and US have been caught with their fingers in the jam jar and their pants down.

Google must delete search results about you if they’re fake, EU court rules

People in Europe can get Google to delete search results about them if they prove the information is “manifestly inaccurate,” the EU’s top court ruled Thursday.

The case kicked off when two investment managers asked Google to dereference results of a search made on the basis of their names, which provided links to certain articles criticising their group’s investment model. They say those articles contain inaccurate claims.

Google refused to comply, arguing that it was unaware whether the information contained in the articles was accurate or not.

But in a ruling Thursday, the Court of Justice of the European Union opened the door to the investment managers being able to successfully trigger the so-called “right to be forgotten” under the EU’s General Data Protection Regulation.

“The right to freedom of expression and information cannot be taken into account where, at the very least, a part – which is not of minor importance – of the information found in the referenced content proves to be inaccurate,” the court said in a press release accompanying the ruling.

People who want to scrub inaccurate results from search engines have to provide sufficient proof that what is said about them is false. But it doesn’t have to come from a court case against a publisher, for instance. They have “to provide only evidence that can reasonably be required of [them] to try to find,” the court said.

[…]

Source: Google must delete search results about you if they’re fake, EU court rules – POLITICO

Telegram is auctioning phone numbers to let users sign up to the service without any SIM

After putting unique usernames up for auction on the TON blockchain, Telegram is now putting anonymous numbers up for bidding. These numbers could be used to sign up for Telegram without needing any SIM card.

Just like the username auction, you can buy these virtual numbers on Fragment, which is a site specially created for Telegram-related auctions. To buy a number, you will have to link your TON wallet (Tonkeeper) to the website.

You can buy a random number for as low as 9 toncoins, which is equivalent to roughly $16.50 at the time of writing. Some of the premium virtual numbers — such as +888-8-888 — are selling for 31,500 toncoins (~$58,200).

Notably, you can only use this number to sign up for Telegram. You can’t use it to receive SMS or calls or use it to register for another service.

For Telegram, this is another way of asking its most loyal supporters to support the app by helping it make some money. The company launched its premium subscription plan earlier this year. On Tuesday, the chat app’s founder Pavel Durov said that Telegram has more than 1 million paid users just a few months after the launch of its premium features. While Telegram offers features like cross-device sync and large groups, it’s important to remember that chats are not protected by end-to-end encryption.

As for folks who want anonymization, Telegram already lets you hide your phone number. Alternatively, there are tons of virtual phone number services out there — including Google Voice, Hushed, and India-based Doosra — that allow you to receive calls and SMS as well.

Source: Telegram is auctioning phone numbers to let users sign up to the service without any SIM