Automakers Are Sharing Consumers’ Driving Behavior With Insurance Companies

car with eye in sky

Kenn Dahl says he has always been a careful driver. The owner of a software company near Seattle, he drives a leased Chevrolet Bolt. He’s never been responsible for an accident. So Mr. Dahl, 65, was surprised in 2022 when the cost of his car insurance jumped by 21 percent. Quotes from other insurance companies were also high. One insurance agent told him his LexisNexis report was a factor.

LexisNexis is a New York-based global data broker with a “Risk Solutions” division that caters to the auto insurance industry and has traditionally kept tabs on car accidents and tickets. Upon Mr. Dahl’s request, LexisNexis sent him a 258-page “consumer disclosure report,” which it must provide under the Fair Credit Reporting Act.

What it contained stunned him: more than 130 pages detailing each time he or his wife had driven the Bolt over the previous six months. It included the dates of 640 trips, their start and end times, the distance driven and an accounting of any speeding, hard braking or sharp accelerations. The only thing it didn’t have was where they had driven the car. On a Thursday morning in June, for example, the car had been driven 7.33 miles in 18 minutes; there had been two rapid accelerations and two incidents of hard braking.

According to the report, the trip details had been provided by General Motors — the manufacturer of the Chevy Bolt. LexisNexis analyzed that driving data to create a risk score “for insurers to use as one factor of many to create more personalized insurance coverage,” according to a LexisNexis spokesman, Dean Carney. Eight insurance companies had requested information about Mr. Dahl from LexisNexis over the previous month.

“It felt like a betrayal,” Mr. Dahl said. “They’re taking information that I didn’t realize was going to be shared and screwing with our insurance.”

In recent years, insurance companies have offered incentives to people who install dongles in their cars or download smartphone apps that monitor their driving, including how much they drive, how fast they take corners, how hard they hit the brakes and whether they speed. But “drivers are historically reluctant to participate in these programs,” as Ford Motor put it in a patent application (PDF) that describes what is happening instead: Car companies are collecting information directly from internet-connected vehicles for use by the insurance industry.
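The kind of per-trip scoring described above can be sketched roughly. Everything below — the field names, the scoring formula — is hypothetical; LexisNexis's actual model is proprietary. The point is only to show how telemetry like the report's June trip reduces to a single number an insurer can consume:

```python
from dataclasses import dataclass

@dataclass
class Trip:
    # One trip as it appears in the consumer disclosure report:
    # distance, duration, and counts of flagged driving events.
    miles: float
    minutes: int
    hard_brakes: int
    rapid_accels: int
    speeding_events: int

def risk_score(trips: list[Trip]) -> float:
    """Toy score: flagged events per 100 miles driven.
    Purely illustrative -- the real scoring model is not public."""
    total_miles = sum(t.miles for t in trips)
    total_events = sum(t.hard_brakes + t.rapid_accels + t.speeding_events
                       for t in trips)
    if total_miles == 0:
        return 0.0
    return 100.0 * total_events / total_miles

# The June trip from the report: 7.33 miles in 18 minutes, with
# two rapid accelerations and two hard-braking incidents.
trips = [Trip(miles=7.33, minutes=18, hard_brakes=2, rapid_accels=2,
              speeding_events=0)]
print(round(risk_score(trips), 1))  # flagged events per 100 miles
```

Even a toy aggregate like this makes clear why six months of trip logs are valuable to insurers: the raw events disappear into a score that looks objective while the underlying collection stays invisible to the driver.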

Sometimes this is happening with a driver’s awareness and consent. Car companies have established relationships with insurance companies, so that if drivers want to sign up for what’s called usage-based insurance — where rates are set based on monitoring of their driving habits — it’s easy to collect that data wirelessly from their cars.

But in other instances, something much sneakier has happened. Modern cars are internet-enabled, allowing access to services like navigation, roadside assistance and car apps that drivers can connect to their vehicles to locate them or unlock them remotely. In recent years, automakers, including G.M., Honda, Kia and Hyundai, have started offering optional features in their connected-car apps that rate people’s driving. Some drivers may not realize that, if they turn on these features, the car companies then give information about how they drive to data brokers like LexisNexis.

Automakers and data brokers that have partnered to collect detailed driving data from millions of Americans say they have drivers’ permission to do so. But the existence of these partnerships is nearly invisible to drivers, whose consent is obtained in fine print and murky privacy policies that few read. Especially troubling is that some drivers with vehicles made by G.M. say they were tracked even when they did not turn on the feature — called OnStar Smart Driver — and that their insurance rates went up as a result.

European Commission broke data protection law with Microsoft Office 365 – duh

The European Commission has been reprimanded for infringing data protection regulations when using Microsoft 365.

The rebuke came from the European Data Protection Supervisor (EDPS) and is the culmination of an investigation that kicked off in May 2021, following the Schrems II judgement.

According to the EDPS, the EC infringed several data protection regulations, including rules around transferring personal data outside the EU / European Economic Area (EEA).

According to the organization, “In particular, the Commission has failed to provide appropriate safeguards to ensure that personal data transferred outside the EU/EEA are afforded an essentially equivalent level of protection as guaranteed in the EU/EEA.

“Furthermore, in its contract with Microsoft, the Commission did not sufficiently specify what types of personal data are to be collected and for which explicit and specified purposes when using Microsoft 365.”

While the concerns are more about EU institutions and transparency, they should also serve as notice to any company doing business in the EU / EEA to take a very close look at how it has configured Microsoft 365 with regard to EU data protection regulations.

[…]

Source: European Commission broke data protection law with Microsoft • The Register

Who knew? An American company running an American cloud product on American servers, and the EU was putting its data on it. Who would have thought that might end up in America?!

Black Trump Supporters Are Being AI-Generated – Trumpistas fall for them though

Donald Trump supporters are creating and sharing AI-generated images of the former president with Black voters. The photos appear to be an attempt to inflate Trump’s popularity with the Black community, which may be irreparably harmed by his ties to white supremacist groups, but the photos are nothing but fakes.

In the leadup to the 2024 Presidential Election, several of these AI-generated dupes of Black Trump supporters have popped up on social media. One image is a holiday photo depicting Trump embracing several Black people. However, it’s an AI dupe created by The Mark Kaye Show, a conservative talk show, and distributed on Facebook to over one million of Kaye’s Facebook followers. The post from November, first reported by the BBC, was not labeled as being AI-generated in any way.

“I never thought I would read the words ‘BLM Leader endorses Donald Trump,’ but then again, Christmas is the time for miracles,” said Kaye in a Facebook post.

The image is obviously an AI fake. Trump’s hands look deformed, and the person on the far left is missing a ring finger.

[…]

Source: Black Trump Supporters Are Being AI-Generated

You don’t own what you bought: Roku Issues a Mandatory Terms of Service Update That You Must Agree To or You Can’t Use Your Roku

roku remote with dollar bills around it

Over the last 48 hours, Roku has slowly been rolling out a mandatory update to its terms of service. The new terms change the dispute resolution provisions, but it is not clear exactly why. When the new terms and conditions message shows up on a Roku player or TV, your only option is to accept them or turn off your Roku and stop using it.

[…]

Roku does offer a way to opt out of these new arbitration rules if you write the company a letter at an address listed in the terms of service. You need to hurry, though: you only get 30 days to write to Roku to opt out, and it is unclear whether that window starts when you buy your Roku or when you agree to the new terms.

Customers are understandably confused by these new terms of service, which have appeared in recent days, raising questions about why now, and why such aggressive messaging that forces you to manually accept them or stop using your device.

[…]

Source: Roku Issues a Mandatory Terms of Service Update That You Must Agree To or You Can’t Use Your Roku | Cord Cutters News

Biden executive order aims to stop a few countries from buying Americans’ personal data – a watered-down EU GDPR

[…]

President Joe Biden will issue an executive order that aims to limit the mass-sale of Americans’ personal data to “countries of concern,” including Russia and China. The order specifically targets the bulk sale of geolocation, genomic, financial, biometric, health and other personally identifying information.

During a briefing with reporters, a senior administration official said that the sale of such data to these countries poses a national security risk. “Our current policies and laws leave open access to vast amounts of American sensitive personal data,” the official said. “Buying data through data brokers is currently legal in the United States, and that reflects a gap in our national security toolkit that we are working to fill with this program.”

Researchers and privacy advocates have long warned about the national security risks posed by the largely unregulated multibillion-dollar data broker industry. Last fall, researchers at Duke University reported that they were able to easily buy troves of personal and health data about US military personnel while posing as foreign agents.

Biden’s executive order attempts to address such scenarios. It bars data brokers and other companies from selling large troves of Americans’ personal information to countries or entities in Russia, China, Iran, North Korea, Cuba and Venezuela either directly or indirectly.

[…]

As the White House points out, there are currently few regulations for the multibillion-dollar data broker industry. The order will do nothing to slow the bulk sale of Americans’ data to countries or companies not deemed to be a security risk. “President Biden continues to urge Congress to do its part and pass comprehensive bipartisan privacy legislation, especially to protect the safety of our children,” a White House statement says.

Source: Biden executive order aims to stop Russia and China from buying Americans’ personal data

Too little, not enough, way way way too late.

Investigators seek push notification metadata in 130 cases – this is scarier than you think

More than 130 petitions seeking access to push notification metadata have been filed in US courts, according to a Washington Post investigation – a finding that underscores the lack of privacy protection available to users of mobile devices.

The poor state of mobile device privacy has provided US state and federal investigators with valuable information in criminal investigations involving suspected terrorism, child sexual abuse, drugs, and fraud – even when suspects have tried to hide their communications using encrypted messaging.

But it also means that prosecutors in states that outlaw abortion could demand such information to geolocate women at reproductive healthcare facilities. Foreign governments may also demand push notification metadata from Apple, Google, third-party push services, or app developers for their own criminal investigations or political persecutions. Concern has already surfaced that they may have done so for several years.

In December 2023, US senator Ron Wyden (D-OR) sent a letter to the Justice Department about a tip received by his office in 2022 indicating that foreign government agencies were demanding smartphone push notification records from Google and Apple.

[…]

Apple and Google operate push notification services that relay communication from third-party servers to specific applications on iOS and Android phones. App developers can encrypt these messages when they’re stored (in transit they’re protected by TLS) but the associated metadata – the app receiving the notification, the time stamp, and network details – is not encrypted.
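The split described above — a payload the developer can encrypt versus routing metadata the push service must see in the clear — can be sketched with a hypothetical notification record. The field names here are illustrative only, not Apple's or Google's actual wire format:

```python
# Hypothetical push notification record, split into what the app
# developer can encrypt and what the push relay must see to route
# the message. Field names and values are illustrative only.
notification = {
    "metadata": {  # visible to the push service -- and subpoenaable
        "app_id": "com.example.messenger",    # the app receiving it
        "device_token": "a1b2c3d4",           # which device it targets
        "timestamp": "2024-03-07T09:14:02Z",  # when it was sent
        "sender_ip": "203.0.113.5",           # network details
    },
    "payload": {   # can be encrypted at rest by the developer
        "ciphertext": "<end-to-end encrypted message body>",
    },
}

# The relay can build a traffic log from metadata alone, without
# ever reading the payload -- which is exactly what investigators
# are requesting in these court petitions.
log_entry = (notification["metadata"]["app_id"],
             notification["metadata"]["timestamp"])
print(log_entry)
```

That is why encrypting the message body does not help here: who received a notification from which app, and when, is enough to map a person's contacts and habits.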

[…]

Push notification metadata is extremely valuable to marketing organizations, to app distributors like Apple and Google, and also to government organizations and law enforcement agencies.

“In 2022, one of the largest push notification companies in the world, Pushwoosh, was found to secretly be a Russian company that deceived both the CDC and US Army into installing their technology into specific government apps,” said Edwards.

“These types of scandals are the tip of the iceberg for how push notifications can be abused, and why countless serious organizations focus on them as a source of intelligence,” he explained.

“If you sign up for push notifications, and travel around to unique locations, as the messages hit your device, specific details about your device, IP address, and location are shared with app stores like Apple and Google,” Edwards added. “And the push notification companies who support these services typically have additional details about users, including email addresses and user IDs.”

Edwards continued that other identifiers may further deprive people of privacy, noting that advertising identifiers can be connected to push notification identifiers. He pointed to Pushwoosh as an example of a firm that built its push notification ID using the iOS advertising ID.

“The simplest way to think about push notifications,” he said, is “they are just like little pre-scheduled messages from marketing vendors, sent via mobile apps. The data that is required to ‘turn on any push notification service’ is quite invasive and can unexpectedly reveal/track your location/store your movement with a third-party marketing company or one of the app stores, which is merely a court order or subpoena away from potentially exposing those personal details.”

Source: Investigators seek push notification metadata in 130 cases • The Register

Also see: Governments, Apple, Google spying on users through push notifications – they all go through Apple and Google servers (unencrypted?)!

Apple reverses hissy fit decision to remove Home Screen web apps in EU

baby throwing a tantrum

Apple has reversed its decision to limit the functionality of Home Screen web apps in Europe following an outcry from the developer community and the prospect of further investigation.

“We have received requests to continue to offer support for Home Screen web apps in iOS, therefore we will continue to offer the existing Home Screen web apps capability in the EU,” the iPhone giant said in an update to its developer documentation on Friday.

“This support means Home Screen web apps continue to be built directly on WebKit and its security architecture, and align with the security and privacy model for native apps on iOS.”

Apple said Home Screen web app support would return with the general availability of iOS 17.4, presently in beta testing and due in the next few days.

[…]

In January, Apple said it would make several changes to its iOS operating system to comply with the law. These include: Allowing third-party app stores; making its NFC hardware accessible to third-party developers for contactless payment applications; and supporting third-party browser engines as alternatives to Safari’s WebKit.

Last month, with the second beta release of iOS 17.4, it became clear Apple would impose a cost for its concessions. The iCloud goliath said, “to comply with the DMA’s requirements, we had to remove the Home Screen web apps feature in the EU.”

Essentially, Apple has to support third-party browser engines in the EU; the biz didn’t want PWAs to use those non-WebKit engines, and so it chose to just banish the web apps from its Home Screen. Now it has changed its mind and allowed the apps to stay, albeit using WebKit.

For those not in the know: The Home Screen web apps feature refers to one of the capabilities afforded to Progressive Web Apps that makes them perform and appear more like native iOS apps. It allows web apps or websites to be opened from an iOS device and take over the whole screen, just like a native app, instead of loading within a browser window.

[…]

Apple’s demotion of Home Screen web apps broke settings integration, browser storage, push notifications, icon badging, share-to-PWA, app shortcuts, and device APIs.

“Cupertino’s attempt to scuttle PWAs under cover of chaos is exactly what it appears to be: a shocking attempt to keep the web from ever emerging as a true threat to the App Store and blame regulators for Apple’s own malicious choices.”

[…]

In response to Apple’s about-face, OWA credited both vocal protests from developers and the reported decision by regulators to open an investigation into Apple’s abandonment of Home Screen web app support.

[…]

“This simply returns us back to the status quo prior to Apple’s plan to sabotage web apps for the EU,” the group said. “Apple’s over-a-decade suppression of the web in favor of the App Store continues worldwide, and their attempt to destroy web apps in the EU is just their latest attempt.

“If there is to be any silver lining, it is that this has thoroughly exposed Apple’s genuine fear of a secure, open and interoperable alternative to their proprietary App Store that they can not control or tax.”

[…]

Source: Apple reverses decision to remove Home Screen web apps in EU • The Register

Apple has thrown a real tantrum about being forced to comply with the DMA and, while hammering hands and feet and rolling on the floor like a toddler who can’t get their way, has broken a lot of stuff. Turns out they are now kind of fixing some of it.

See also: Shameless Insult, Malicious Compliance, Junk Fees, Extortion Regime: Industry Reacts To Apple’s Proposed Changes Over Digital Markets Act

HDMI Forum blocks AMD open sourcing drivers due to 2.1

stop using hdmi

As spotted by Linux benchmarking outfit Phoronix, AMD is having problems releasing certain versions of open-source drivers it’s developed for its GPUs – because, according to the Ryzen processor designer, the HDMI Forum won’t allow the code to be released as open source. Specifically, we’re talking about AMD’s FOSS drivers for HDMI 2.1 here.

For some years, AMD GPU customers running Linux have faced difficulties getting high-definition, high-refresh-rate displays connected over HDMI 2.1 to work correctly.

[…]

The issue isn’t missing drivers: AMD has already developed them under its GPU Open initiative. As AMD developer Alex Deucher put it in two different comments on the Freedesktop.org forum:

HDMI 2.1 is not available on Linux due to the HDMI Forum.

The HDMI Forum does not currently allow an open source HDMI 2.1 implementation.

The High-Definition Multimedia Interface is not just a type of port into which to plug your monitor. It’s a whole complex specification, of which version 2.1, the latest, was published in 2017.

[…]

HDMI cables are complicated things, including copyright-enforcing measures called High-bandwidth Digital Content Protection (HDCP) – although some of those were cracked way back in 2010. As we reported when it came out, you needed new cables to get the best out of HDMI 2.1. Since then, that edition was supplemented by version 2.1b in August 2023 – so now, you may need even newer ones.

This is partly because display technology is constantly improving. 4K displays are old tech: We described compatibility issues a decade ago, and covered 4K gaming the following year.

Such high-quality video brings two consequences. On the one hand, the bandwidth the cables are expected to carry has increased substantially. On the other, some forms of copying or duplication involving a reduction in image quality — say, halving the vertical and horizontal resolution — might still result in a perfectly watchable copy.
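A back-of-the-envelope calculation shows why the bandwidth jump between HDMI versions mattered. The figures below are the raw pixel-data rate only, ignoring blanking intervals and link-layer overhead, so real link requirements are somewhat higher:

```python
# Raw (uncompressed) pixel-data rate for a given video mode,
# ignoring blanking intervals and link-layer overhead.
def raw_gbps(width: int, height: int, fps: int,
             bits_per_channel: int, channels: int = 3) -> float:
    return width * height * fps * bits_per_channel * channels / 1e9

# 4K at 60 Hz, 8-bit RGB: fits within HDMI 2.0's 18 Gbps link.
print(round(raw_gbps(3840, 2160, 60, 8), 1))   # ~11.9 Gbps

# 4K at 120 Hz, 10-bit RGB: needs HDMI 2.1's 48 Gbps signaling.
print(round(raw_gbps(3840, 2160, 120, 10), 1))  # ~29.9 Gbps
```

Roughly tripling the raw rate is what pushed the spec from 18 Gbps to 48 Gbps signaling, and why new cables were needed to get the best out of HDMI 2.1.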

[…]

As we have noted before, we prefer DisplayPort to HDMI, and one reason is that you can happily drive an HDMI monitor from a DisplayPort output using a cheap cable, or if you have an HDMI cable to hand, an inexpensive adapter. We picked a random example which is a bargain at under $5.

But the converse does not hold. You can’t drive a DisplayPort screen from an HDMI port. That needs an intelligent adapter which can resample the image and regenerate the display. That said, they are getting cheaper, and for lower-quality video such as old VGA or SCART outputs, these days a circa-$5 microcontroller board such as a Raspberry Pi Pico can do the job, and you can build your own.

Source: HDMI Forum ‘blocks AMD open sourcing its 2.1 drivers’ • The Register

Scammers Are Now Scanning Faces To Defeat Age Verification Biometric Security Measures

For quite some time now we’ve been pointing out the many harms of age verification technologies, and how they’re a disaster for privacy. In particular, we’ve noted that if you have someone collecting biometric information on people, that data itself becomes a massive risk since it will be targeted.

And, remember, a year and a half ago, the Age Verification Providers Association posted a comment right here on Techdirt saying not to worry about the privacy risks, as all they wanted to do was scan everyone’s face to visit a website (perhaps making you turn to the left or right to prove “liveness”).

Anyway, now a report has come out that some Chinese hackers have been tricking people into having their faces scanned, so that the hackers can then use the resulting scan to access accounts.

Attesting to this, cybersecurity company Group-IB has discovered the first banking trojan that steals people’s faces. Unsuspecting users are tricked into giving up personal IDs and phone numbers and are prompted to perform face scans. These images are then swapped out with AI-generated deepfakes that can easily bypass security checkpoints.

The method — developed by a China-based hacking group — is believed to have been used in Vietnam earlier this month, when attackers lured a victim into a malicious app, tricked them into face scanning, then withdrew the equivalent of $40,000 from their bank account.

Cool cool, nothing could possibly go wrong in now requiring more and more people to normalize the idea of scanning your face to access a website. Nothing at all.

And no, this isn’t about age verification, but still, the normalization of facial scanning is a problem, as it’s such an obvious target for scammers and hackers.

Source: As Predicted: Scammers Are Now Scanning Faces To Defeat Biometric Security Measures | Techdirt

Meta will start collecting much more “anonymized” data about Quest headset usage

Meta will soon begin “collecting anonymized data” from users of its Quest headsets, a move that could see the company aggregating information about hand, body, and eye tracking; camera information; “information about your physical environment”; and information about “the virtual reality events you attend.”

In an email sent to Quest users Monday, Meta notes that it currently collects “the data required for your Meta Quest to work properly.” Starting with the next software update, though, the company will begin collecting and aggregating “anonymized data about… device usage” from Quest users. That anonymized data will be used “for things like building better experiences and improving Meta Quest products for everyone,” the company writes.

A linked help page on data sharing clarifies that Meta can collect anonymized versions of any of the usage data included in the “Supplemental Meta Platforms Technologies Privacy Policy,” which was last updated in October. That document lists a host of personal information that Meta can collect from your headset, including:

  • “Your audio data, when your microphone preferences are enabled, to animate your avatar’s lip and face movement”
  • “Certain data” about hand, body, and eye tracking, “such as tracking quality and the amount of time it takes to detect your hands and body”
  • Fitness-related information such as the “number of calories you burned, how long you’ve been physically active, [and] your fitness goals and achievements”
  • “Information about your physical environment and its dimensions” such as “the size of walls, surfaces, and objects in your room and the distances between them and your headset”
  • “Voice interactions” used when making audio commands or dictations, including audio recordings and transcripts that might include “any background sound that happens when you use those services” (these recordings and transcriptions are deleted “immediately” in most cases, Meta writes)
  • Information about “your activity in virtual reality,” including “the virtual reality events you attend”

The anonymized usage data is used in part to “analyz[e] device performance and reliability” to “improve the hardware and software that powers your experiences with Meta VR Products.”

What does Meta know about what you're doing in VR?

Meta’s help page also lists a small subset of “additional data” that headset users can opt out of sharing with Meta. But there’s no indication that Quest users can opt out of the new anonymized data collection policies entirely.

These policies only seem to apply to users who make use of a Meta account to access their Quest headsets, and those users are also subject to Meta’s wider data-collection policies. Those who use a legacy Oculus account are subject to a separate privacy policy that describes a similar but more limited set of data-collection practices.

Not a new concern

Meta is clear that the data it collects “is anonymized so it does not identify you.” But here at Ars, we’ve long covered situations where data that was supposed to be “anonymous” was linked back to personally identifiable information about the people who generated it. The FTC is currently pursuing a case against Kochava, a data broker that links de-anonymized geolocation data to a “staggering amount of sensitive and identifying information,” according to the regulator.
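The re-identification problem described above is easy to sketch: "anonymous" records become identifying once joined against any side dataset that ties a location to a person. Everything here — device IDs, coordinates, names — is fabricated for illustration:

```python
# Toy linkage attack: "anonymized" location pings carry only a
# random device ID, but joining the device's most-visited location
# against a side dataset (an address list, a leaked delivery
# database, etc.) re-identifies the owner. All data is made up.
from collections import Counter

anonymized_pings = [  # (device_id, rounded lat/lon) -- no name attached
    ("dev-42", (47.61, -122.33)),
    ("dev-42", (47.61, -122.33)),   # repeated visits: likely home
    ("dev-42", (47.62, -122.35)),
]

address_book = {      # any dataset linking a place to a person
    (47.61, -122.33): "A. Resident",
}

def reidentify(pings, addresses):
    """Guess a device's owner from its most frequent location."""
    home = Counter(loc for _, loc in pings).most_common(1)[0][0]
    return addresses.get(home)

print(reidentify(anonymized_pings, address_book))
```

Stripping the name from a record does little when the record's contents are themselves a fingerprint, which is the core of the FTC's complaint against Kochava.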

Concerns about VR headset data collection date back to when Meta’s virtual reality division was still named Oculus. Shortly after the launch of the Oculus Rift in 2016, Senator Al Franken (D-Minn.) sent an open letter to the company seeking information on “the extent to which Oculus may be collecting Americans’ personal information, including sensitive location data, and sharing that information with third parties.”

In 2020, the company then called Facebook faced controversy for requiring Oculus users to migrate to a Facebook account to continue using their headsets. That led to a temporary pause of Oculus headset sales in Germany before Meta finally offered the option to decouple its VR accounts from its social media accounts in 2022.

Source: Meta will start collecting “anonymized” data about Quest headset usage | Ars Technica

Canadian college M&M vending machines secretly scanning faces – revealed by error message

[…]

The scandal started when a student using the alias SquidKid47 posted an image on Reddit showing a campus vending machine error message, “Invenda.Vending.FacialRecognitionApp.exe,” displayed after the machine failed to launch a facial recognition application that nobody expected to be part of the process of using a vending machine.

Reddit post shows error message displayed on a University of Waterloo vending machine (cropped and lightly edited for clarity).

“Hey, so why do the stupid M&M machines have facial recognition?” SquidKid47 pondered.

The Reddit post sparked an investigation from a fourth-year student named River Stanley, who was writing for a university publication called MathNEWS.

Stanley sounded the alarm after consulting Invenda sales brochures that promised “the machines are capable of sending estimated ages and genders” of every person who used the machines, without ever requesting consent.

This frustrated Stanley, who discovered that Canada’s privacy commissioner had years ago investigated a shopping mall operator called Cadillac Fairview after discovering some of the malls’ informational kiosks were secretly “using facial recognition software on unsuspecting patrons.”

Only because of that official investigation did Canadians learn that “over 5 million nonconsenting Canadians” were scanned into Cadillac Fairview’s database, Stanley reported. While Cadillac Fairview was ultimately forced to delete the entire database, Stanley wrote that the consequences for Invenda clients like Mars of collecting similarly sensitive facial recognition data without consent remain unclear.

Stanley’s report ended with a call for students to demand that the university “bar facial recognition vending machines from campus.”

A University of Waterloo spokesperson, Rebecca Elming, eventually responded, confirming to CTV News that the school had asked for the vending machine software to be disabled until the machines could be removed.

[…]

Source: Vending machine error reveals secret face image database of college students | Ars Technica

European human rights court says backdooring encrypted comms is against human rights

a picture of an eye staring at your from your mobile phone

The European Court of Human Rights (ECHR) has ruled that laws requiring crippled encryption and extensive data retention violate the European Convention on Human Rights – a decision that may derail European data surveillance legislation known as Chat Control.

The Court issued a decision on Tuesday stating that “the contested legislation providing for the retention of all internet communications of all users, the security services’ direct access to the data stored without adequate safeguards against abuse and the requirement to decrypt encrypted communications, as applied to end-to-end encrypted communications, cannot be regarded as necessary in a democratic society.”

The “contested legislation” mentioned above refers to a legal challenge that started in 2017 after a demand from Russia’s Federal Security Service (FSB) that messaging service Telegram provide technical information to assist the decryption of a user’s communication. The plaintiff, Anton Valeryevich Podchasov, challenged the order in Russia but his claim was dismissed.

In 2019, Podchasov brought the matter to the ECHR. Russia joined the Council of Europe – an international human rights organization – in 1996 and was a member until it withdrew in March 2022 following its illegal invasion of Ukraine. Because the 2019 case predates Russia’s withdrawal, the ECHR continued to consider the matter.

The Court concluded that the Russian law requiring Telegram “to decrypt end-to-end encrypted communications risks amounting to a requirement that providers of such services weaken the encryption mechanism for all users.” As such, the Court considers that requirement disproportionate to legitimate law enforcement goals.

While the ECHR decision is unlikely to have any effect within Russia, it matters to countries in Europe that are contemplating similar decryption laws – such as Chat Control and the UK government’s Online Safety Act.

Chat Control is shorthand for European data surveillance legislation that would require internet service providers to scan digital communications for illegal content – specifically child sexual abuse material and potentially terrorism-related information. Doing so would necessarily entail weakening the encryption that keeps communication private.

Efforts to develop workable rules have been underway for several years and continue to this day, despite widespread condemnation from academics, privacy-oriented orgs, and civil society groups.

Patrick Breyer, a member of the European parliament for the Pirate Party, hailed the ruling for demonstrating that Chat Control is incompatible with EU law.

“With this outstanding landmark judgment, the ‘client-side scanning’ surveillance on all smartphones proposed by the EU Commission in its chat control bill is clearly illegal,” said Breyer.

“It would destroy the protection of everyone instead of investigating suspects. EU governments will now have no choice but to remove the destruction of secure encryption from their position on this proposal – as well as the indiscriminate surveillance of private communications of the entire population!”

Source: European human rights court says no to weakened encryption • The Register

23andMe Thinks ‘Mining’ Your DNA Data Is Its Last Hope

23andMe is in a death spiral. Almost everyone who wants a DNA test already bought one, a nightmare data breach ruined the company’s reputation, and 23andMe’s stock is so close to worthless it might get kicked off the Nasdaq. CEO Anne Wojcicki is on a crisis tour, promising investors the company isn’t going out of business because she has a new plan: 23andMe is going to double down on mining your DNA data and selling it to pharmaceutical companies.

“We now have the ability to mine the dataset for ourselves, as well as to partner with other groups,” Wojcicki said in an interview with Wired. “It’s a real resource that we could apply to a number of different organizations for their own drug discovery.”

That’s been part of the plan since day one, but now it looks like it’s going to happen on a much larger scale. 23andMe has always coerced its customers into giving the company consent to share their DNA for “research,” a friendlier way of saying “giving it to pharmaceutical companies.” The company enjoyed an exclusive partnership with pharmaceutical giant GlaxoSmithKline, but apparently the drug maker already sucked the value out of your DNA, and that deal is running out. Now, 23andMe is looking for new companies that want to take a look at your genes.

[…]

the most exciting opportunity for “improvements” is that 23andMe and the pharmaceutical industry get to develop new drugs. There’s a tinge of irony here. Any discoveries that 23andMe makes come from studying DNA samples that you paid the company to collect.

[…]

The problem with 23andMe’s consumer-facing business is the company sells a product you only need once in a lifetime. Worse, the appeal of a DNA test for most people is the novelty of ancestry results, but if your brother already paid for a test, you already know the answers.

[…]

it’s spent years trying to brand itself as a healthcare service, and not just a $79 permission slip to tell people you’re Irish. In fact, the company thinks you should buy yourself a recurring annual subscription to something called 23andMe+ Total Health. It only costs $1,188 a year.

[…]

The secret is you just can’t learn a ton about your health from genetic screenings, aside from tests for specific diseases that doctors rarely order unless you have a family history.

[…]

What do you get with these subscriptions? It’s kind of vague. Depending on the package, they include a service that “helps you understand how genetics and lifestyle can impact your likelihood of developing certain conditions,” testing for rare genetic conditions, enhanced ancestry features, and more. Essentially, they’ll run genetic tests that you may not need. Then, they may or may not recommend that you talk to a doctor, because they can’t offer you actual medical care.

You could also skip the middleman and start with a normal conversation with your doctor, who will order genetic tests if you need them and bill your insurance company.

[…]

If 23andMe survives, the first step is going to be deals that give more companies access to look at your genetics than ever before. But if 23andMe goes out of business, it’ll get purchased or sold off for parts, which means other companies will get a look at your data anyway.

Source: 23andMe Admits ‘Mining’ Your DNA Data Is Its Last Hope

What this piece misses is the danger of who the data is sold to – or of it being leaked (which it was). Insurance companies may refuse to insure you. Your DNA may be faked. Your unique and unchangeable identity – and your family’s – has been stolen.

US judge dismisses authors’ ridiculous copyright claim against OpenAI

A US judge has dismissed some of the claims made by writers in a copyright infringement lawsuit against OpenAI, though gave the wordsmiths another chance to amend their complaint.

The case – Paul Tremblay et al vs OpenAI – kicked off in 2023 when novelists Paul Tremblay, Christopher Golden, and Richard Kadrey, and writer-comedian-actress Sarah Silverman accused OpenAI of illegally scraping their work without consent to train the AI champion’s large language models.

The creators claimed that ChatGPT produced accurate summaries of their books and offered that as evidence that their writing had been ripped off. Since OpenAI’s neural networks learn to generate text from their training data, the group argued that the models’ output should be considered a “derivative work” of the authors’ IP.

The plaintiffs also alleged that OpenAI’s model deliberately omitted so-called copyright management information, or CMI – think books’ ISBN numbers and authors’ names – when it produced output based on their works. They also accused the startup of unfair competition, negligence, and unjust enrichment.

All in all, the writers are upset that, as alleged, OpenAI not only used copyrighted work without permission or recompense to train its models, but that those models generate prose closely aping their own – which, one might say, would hinder their ability to profit from that work.

Federal district Judge Araceli Martínez-Olguín, sitting in northern California, was asked by OpenAI to dismiss the authors’ claims in August.

In a fresh order [PDF] released on Monday, Martínez-Olguín delivered the bad news for the scribes.

“Plaintiffs fail to explain what the outputs entail or allege that any particular output is substantially similar – or similar at all – to their books. Accordingly, the court dismisses the vicarious copyright infringement claim,” she wrote. She also opined that the authors couldn’t prove that CMI had been stripped from the training data or that its absence indicated an intent to hide any copyright infringement.

Claims of unlawful business practices, fraudulent conduct, negligence, and unjust enrichment were similarly dismissed.

The judge did allow a claim of unfair business practices to proceed.

“Assuming the truth of plaintiffs’ allegations – that defendants used plaintiffs’ copyrighted works to train their language models for commercial profit – the court concludes that defendants’ conduct may constitute an unfair practice,” Martínez-Olguín wrote.

Although this case against OpenAI has been narrowed, it clearly isn’t over yet. The plaintiffs have been given another opportunity to amend their initial arguments alleging violation of copyright by filing a fresh complaint before March 13.

The Register has asked OpenAI and a lawyer representing the plaintiffs for comment. We’ll let you know if they have anything worth saying. ®

Source: US judge dismisses authors’ copyright claim against OpenAI • The Register

See also: A Bunch Of Authors Sue OpenAI Claiming Copyright Infringement, Because They Don’t Understand Copyright

and: OpenAI disputes authors’ claims that every ChatGPT response is a derivative work, it’s transformative

France uncovers a vast Russian disinformation campaign in Europe

Russia has been at the forefront of internet disinformation techniques at least since 2014, when it pioneered the use of bot farms to spread fake news about its invasion of Crimea. According to French authorities, the Kremlin is at it again. On February 12th Viginum, the French foreign-disinformation watchdog, announced it had detected preparations for a large disinformation campaign in France, Germany, Poland and other European countries, tied in part to the second anniversary of Vladimir Putin’s invasion of Ukraine and the elections to the European Parliament in June.

Viginum said it had uncovered a Russian network of 193 websites which it codenames “Portal Kombat”. Most of these sites, such as topnews.uz.ua, were created years ago and many were left dormant. Over 50 of them, such as news-odessa.ru and pravda-en.com, have been created since 2022. Current traffic to these sites, which exist in various languages including French, German, Polish and English, is low. But French authorities think they are ready to be activated aggressively as part of what one official calls a “massive” wave of Russian disinformation.

Viginum says it watched the sites between September and December 2023. It concluded that they do not themselves generate news stories, but are designed to spread “deceptive or false” content about the war in Ukraine, both on websites and via social media. The underlying objective is to undermine support for Ukraine in Europe. According to the French authorities, the network is controlled by a single Russian organisation.

[…]

For France, the detection of this latest Russian destabilisation effort comes after a series of campaigns that it has attributed to Moscow. Last November the French foreign ministry denounced a “Russian digital interference operation” that spread photos of Stars of David stencilled on walls in a neighbourhood of Paris, in order to stir intercommunal tension in France shortly after the start of the Israel-Hamas conflict. Viginum then detected a network of 1,095 bots on X (formerly Twitter), which published 2,589 posts. It linked this to a Russian internet complex called Recent Reliable News, known for cloning the websites of Western media outlets in order to spread fake news; the EU has dubbed that complex “Doppelgänger”.

France held the same network responsible in June 2023 for the cloning of various French media websites, as well as that of the French foreign ministry. On the cloned ministry website, hackers posted a statement suggesting, falsely, that France was to introduce a 1.5% “security tax” to finance military aid to Ukraine.

[…]

The EU wants to criminalize AI-generated deepfakes and the non-consensual sending of intimate images

[…] the European Council and Parliament have agreed with the proposal to criminalize, among other things, different types of cyber-violence. The proposed rules will criminalize the non-consensual sharing of intimate images, including deepfakes made by AI tools, which could help deter revenge porn. Cyber-stalking, online harassment, misogynous hate speech and “cyber-flashing,” or the sending of unsolicited nudes, will also be recognized as criminal offenses.

The commission says that having a directive for the whole European Union that specifically addresses those particular acts will help victims in Member States that haven’t criminalized them yet. “This is an urgent issue to address, given the exponential spread and dramatic impact of violence online,” it wrote in its announcement.

[…]

In its reporting, Politico suggested that the recent spread of pornographic deepfake images using Taylor Swift’s face spurred EU officials to move forward with the proposal.

[…]

“The final law is also pending adoption in Council and European Parliament,” the EU Council said. According to Politico, if all goes well and the bill becomes a law soon, EU states will have until 2027 to enforce the new rules.

Source: The EU wants to criminalize AI-generated porn images and deepfakes

The original article has a seriously misleading title, I guess for clickbait.

Criticism as Dutch domain registry plans move to Amazon cloud

Questions are being asked in parliament about the decision by Dutch domain registration foundation SIDN to transfer the dot nl domain and its “complete ICT services” to Amazon’s cloud services. 

SIDN says the move will make managing the technology easier but some tech entrepreneurs have doubts, and now MPs have asked the government, which supports the idea of keeping .nl on Dutch or European servers, to explain why the move has been sanctioned. 

Tech entrepreneur Bert Hubert told BNR radio he opposes the idea of shifting the domain to cloud operators in the US. “If your servers are on your own continent and under your legal surveillance, then you can also be sure that no one will mess with your data,” he said. 

The added value of keeping .nl domain names under Dutch control also means “we control it ourselves and can innovate with it ourselves… When you outsource, you always lose your knowledge,” he said. 

Simon Besteman, managing director of the Dutch Cloud Community said on social media he was shocked by SIDN’s decision. “We have been inundated with questions from the Dutch internet community and our members… who have questions about the ethical as well as compliance and moral aspects.”

SIDN says that all data will remain on European servers and that users will not notice any difference in practice. It also argues that Amazon has the extremely specialised services it needs, and that these are not available in Europe.  

It was a difficult decision to move the systems to Amazon, SIDN technology chief Loek Bakker said in a reaction to the criticism.

“Although we seek to contribute to the strategic digital autonomy of the Netherlands and Europe in numerous ways, the need to assure the permanent availability of .nl and the protection of our data was decisive in this instance. That is, after all, our primary responsibility as a registry.”

Nevertheless, he said “We will be using generic, open-source technology, so that, as soon as it becomes responsible to migrate the system to a Dutch or European cloud service provider, we can do so relatively easily.”

You can smell the nonsense here very clearly – SIDN was, and should be, a highly technical organisation. Apparently the bean counters have taken over and kicked out the expertise in the name of… cost cutting? Are they aware that AWS often costs more than self-maintenance? But the manager gets a nice trip to the US in a private jet, or something like it?

And nothing about AWS is open source – the company is in fact known for taking open-source projects, forking them, and then charging through the nose for them.

MPs from GroenLinks, the PvdA and D66 have now asked the government to explain why the move is being made, Hubert said.

SIDN is a foundation that has the right to exploit the .nl domain name, earning some €21 million a year in the process. More than six million .nl domains have been registered. 

Source: Criticism as Dutch domain registry plans move to Amazon cloud – DutchNews.nl

Cory Doctorow’s McLuhan lecture on enshittification (30 Jan 2024)

Last year, I coined the term ‘enshittification’ to describe the way that platforms decay. That obscene little word did big numbers; it really hit the zeitgeist. I mean, the American Dialect Society made it their Word of the Year for 2023 (which, I suppose, means that now I’m definitely getting a poop emoji on my tombstone).

So what’s enshittification and why did it catch fire? It’s my theory explaining how the internet was colonized by platforms, and why all those platforms are degrading so quickly and thoroughly, and why it matters – and what we can do about it.

We’re all living through the enshittocene, a great enshittening, in which the services that matter to us, that we rely on, are turning into giant piles of shit.

It’s frustrating. It’s demoralizing. It’s even terrifying.

I think that the enshittification framework goes a long way to explaining it, moving us out of the mysterious realm of the ‘great forces of history,’ and into the material world of specific decisions made by named people – decisions we can reverse and people whose addresses and pitchfork sizes we can learn.

Enshittification names the problem and proposes a solution. It’s not just a way to say ‘things are getting worse’ (though of course, it’s fine with me if you want to use it that way. It’s an English word. We don’t have der Rat für Englisch Rechtschreibung. English is a free for all. Go nuts, meine Kerle).

[…]

Source: Pluralistic: My McLuhan lecture on enshittification (30 Jan 2024) – Pluralistic: Daily links from Cory Doctorow

It’s a good essay on what enshittification is, what causes it, why it’s so bad and some ideas on how to get rid of it. Very worth reading.

Hundreds of thousands of EU citizens ‘wrongly fined for driving in London Ulez’ in one of the EU’s largest privacy breaches

Hundreds of thousands of EU citizens were wrongly fined for driving in London’s Ulez clean air zone, according to European governments, in what has been described as “possibly one of the largest data breaches in EU history”.

The Guardian can reveal Transport for London (TfL) has been accused by five EU countries of illegally obtaining the names and addresses of their citizens in order to issue the fines, with more than 320,000 penalties, some totalling thousands of euros, sent out since 2021.

[…]

Since Brexit, the UK has been banned from automatic access to personal details of EU residents. Transport authorities in Belgium, Spain, Germany and the Netherlands have confirmed to the Guardian that driver data cannot be shared with the UK for enforcement of London’s ultra-low emission zone (Ulez), and claim registered keeper details were obtained illegally by agents acting for TfL’s contractor Euro Parking Collection.

In France, more than 100 drivers have launched a lawsuit claiming their details were obtained fraudulently, while Dutch lorry drivers are taking legal action against TfL over £6.5m of fines they claim were issued unlawfully.

According to the Belgian MP Michael Freilich, who has investigated the issue on behalf of his constituents, TfL is treating European drivers as a “cash cow” by using data obtained illegitimately to issue unjustifiable fines.

Many of the penalties have been issued to drivers who visited London in Ulez-compliant vehicles and were not aware they had to be registered with TfL’s collections agent Euro Parking at least 10 days before their visit.

Failure to register does not count as a contravention, according to Ulez rules, but some drivers have nonetheless received penalties of up to five-figure sums.

[…]

Some low-emission cars have been misclassed as heavy goods diesel vehicles and fined under the separate low-emission zone (Lez) scheme, which incurs penalties of up to £2,000 a day. Hundreds of drivers have complained that the fines arrived weeks after the early payment discount and appeals deadlines had passed.

One French driver was fined £25,000 for allegedly contravening Lez and Ulez rules, despite the fact his minibus was exempt.

[…]

EU countries say national laws allow the UK to access personal data only for criminal offences, not civil ones. Breaching Ulez rules is a civil offence, while more risky behaviour such as speeding or driving under the influence of drink or drugs can be a criminal offence. This raises the question of whether Euro Parking can legally carry out its contract with TfL.

Euro Parking was awarded a five-year contract by TfL in 2020 to recover debts from foreign drivers who had breached congestion or emission zone rules.

The company, which is paid according to its performance, is estimated to have earned between £5m and £10m. It has the option to renew for a further five years.

The firm is owned by the US transport technology group Verra Mobility, which is listed on the Nasdaq stock exchange and headed by the former Bank of America Merrill Lynch executive David Roberts. The company’s net revenue was $205m (£161m) in the second quarter of 2023.

In October, the Belgian government ordered a criminal investigation after a court bailiff was accused of illegally passing the details of 20,000 drivers to Euro Parking for Ulez enforcement. The bailiff was suspended in 2022 and TfL initially claimed that no Belgian data had been shared with Euro Parking since then. However, a freedom of information request by the Guardian found that more than 17,400 fines had been issued to Belgians in the intervening 19 months.

[…]

Campaigners accuse Euro Parking of circumventing data protection rules by using EU-based agents to request driver data without disclosing that it is for UK enforcement.

Last year, an investigation by the Dutch vehicle licensing authority RDW found that the personal details of 55,000 citizens had been obtained via an NCP (national contact point) in Italy. “The NCP informed us that the authorised users have used the data in an unlawful way and stopped their access,” a spokesperson said.

The German transport authority KBA claimed that an Italian NCP was used to obtain information from its database. “Euro Parking obtained the data through unlawful use of an EU directive to facilitate the cross-border exchange of information about traffic offences that endanger road safety,” a KBA spokesperson said. “The directive does not include breaches of environmental rules.”

Spain’s transport department told the Guardian that UK authorities were not allowed access to driver details for Ulez enforcement. Euro Parking has sent more than 25,600 fines to Spanish drivers since 2021.

In France, 102 drivers have launched a lawsuit claiming that their details were fraudulently obtained.

[…]

Source: Hundreds of thousands of EU citizens ‘wrongly fined for driving in London Ulez’ | TfL | The Guardian

I guess Brexit has panned out economically much worse than we thought

Palworld Is a Great Example Of The Idea/Expression Dichotomy | Techdirt

When it comes to copyright suits or conflicts that never should have existed, one of the most common misunderstandings that births them is not understanding the idea/expression dichotomy in copyright law. Even to most laypeople, once you explain it, it’s quite simple. You can copyright a specific expression of something, such as literature, recorded music, etc., but you cannot copyright a general idea. So, while Superman may be subject to copyright protections as a character and in depictions of that character, you cannot copyright a superhero that flies, wears a cape, shoots beams from his eyes, and has super strength. For evidence of that, see: Homelander from The Boys.

But while Homelander is a good case study in the protections offered by the idea/expression dichotomy, a more perfect one might be the recently released PC game Palworld, which has often been described as “Pokémon, but with guns.” The game is already a megahit: it arrived in Early Access in mid-January and has already reached 1 million concurrent players. And if you’re wondering just how “Pokémon, but with guns” this game is, well…

The art styles are similar, it’s essentially a monster-collecting game involving battles, and so on. You get it. And this has led to a whole lot of speculation out there that all of this somehow constitutes copyright infringement, or plagiarism, on the part of publisher PocketPair. There is likewise speculation that it’s only a matter of time before Nintendo, Game Freak, or The Pokémon Co. sues the hell out of PocketPair over all of this.

And that may still happen – The Pokémon Company says it’s investigating Palworld. All of those companies have shown themselves to be voracious IP enforcers, after all. But the fact is that there is nothing in this game that is a direct copy of any expression owned by any of those entities. To that end, when asked about any concerns over lawsuits, PocketPair is taking a very confident posture.

On the other hand, we had a chance to talk to PocketPair’s CEO Takuro Mizobe before Palworld’s release, and addressing this topic, Mizobe mentioned that Palworld has cleared legal reviews, and that there has been no action taken against it by other companies. Mizobe shared PocketPair’s stance on the issue, stating, “We make our games very seriously, and we have absolutely no intention of infringing upon the intellectual property of other companies.” 

Mizobe has also commented that, in his personal opinion, Palworld is not at all that similar to Pokémon, even citing other IPs that Palworld more closely resembles. (Related article) He encouraged users to see past the rumors and give Palworld a chance.  

And he’s right. The game mechanics themselves go far beyond anything Pokémon has on offer. And while we can certainly say that even some of the Pals themselves look as though they were inspired by some well-known Pokémon, there are more than enough differences in sum-total to make any claim that this is some kind of direct ripoff simply untrue. Some of the ideas are very, very similar. The expression, however, is different.

In addition to the legal review that Mizobe mentioned, it’s not like the game as a concept has been kept a secret, either.

Though it released just a few days ago, Palworld’s concept and content has been open to the public for quite a while, and were even presented at the Tokyo Game Show in both 2022 and 2023. Many users are of the opinion that, if there were basis for plagiarism-related legal action, the relevant parties would have already acted by now. 

I would normally agree, but in this case, well, it’s Pokémon and Nintendo, so who knows. Maybe legal action is coming, maybe not. If it does come, however, it should fail. And fail miserably. All because of the idea/expression dichotomy.

Source: Palworld Is a Great Example Of The Idea/Expression Dichotomy | Techdirt

It’s quite fortunate that Palworld has sold millions of copies quickly, because that means the developer should have the funds to withstand a legal onslaught from Nintendo. In justice, what often matters is not whether you are right, but whether you are rich.

iPhone Apps Secretly Harvest Data When They Send You Notifications, Researchers Find

iPhone apps including Facebook, LinkedIn, TikTok, and X/Twitter are skirting Apple’s privacy rules to collect user data through notifications, according to tests by security researchers at Mysk Inc., an app development company. Users sometimes close apps to stop them from collecting data in the background, but this technique gets around that protection. The data is unnecessary for processing notifications, the researchers said, and seems related to analytics, advertising, and tracking users across different apps and devices.

It’s par for the course that apps would find opportunities to sneak in more data collection, but “we were surprised to learn that this practice is widely used,” said Tommy Mysk, who conducted the tests along with Talal Haj Bakry. “Who would have known that an innocuous action as simple as dismissing a notification would trigger sending a lot of unique device information to remote servers? It is worrying when you think about the fact that developers can do that on-demand.”

These particular apps aren’t unusual bad actors. According to the researchers, it’s a widespread problem plaguing the iPhone ecosystem.

This isn’t the first time Mysk’s tests have uncovered data problems at Apple, which has spent untold millions convincing the world that “what happens on your iPhone, stays on your iPhone.” In October 2023, Mysk found that a lauded iPhone feature meant to protect details about your WiFi address isn’t as private as the company promises. In 2022, Apple was hit with over a dozen class action lawsuits after Gizmodo reported on Mysk’s finding that Apple collects data about its users even after they flip the switch on an iPhone privacy setting that promises to “disable the sharing of device analytics altogether.”

The data looks like information that’s used for “fingerprinting,” a technique companies use to identify you based on several seemingly innocuous details about your device. Fingerprinting circumvents privacy protections to track people and send them targeted ads.

[…]

For example, the tests showed that when you interact with a notification from Facebook, the app collects IP addresses, the number of milliseconds since your phone was restarted, the amount of free memory space on your phone, and a host of other details. Combining data like these is enough to identify a person with a high level of accuracy. The other apps in the test collected similar information. LinkedIn, for example, uses notifications to gather which timezone you’re in, your display brightness, and what mobile carrier you’re using, as well as a host of other information that seems specifically related to advertising campaigns, Mysk said.
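The core of fingerprinting is that none of these values identifies you on its own, but a stable combination of them does. The sketch below illustrates the general technique only; the attribute names are hypothetical, not fields observed in any of the apps tested.

```python
import hashlib

def device_fingerprint(attributes: dict) -> str:
    # Canonicalize: sort keys so the same set of attributes always
    # produces the same string, whatever order they were collected in.
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    # Hash the result so the identifier is compact and opaque.
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Hypothetical device attributes, loosely modeled on the categories
# the researchers describe (timezone, carrier, brightness, memory).
fp = device_fingerprint({
    "timezone": "Europe/Amsterdam",
    "carrier": "ExampleCell",
    "display_brightness": "0.73",
    "free_memory_mb": "11873",
})
```

In practice, volatile values such as milliseconds since reboot would be bucketed or dropped, since a useful fingerprint must stay stable across sessions while still being distinctive across devices.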

[…]

Apps can collect this kind of data about you when they’re open, but swiping an app closed is supposed to cut off the flow of data and stop the app from running altogether. However, it seems notifications provide a backdoor.

Apple provides special software to help your apps send notifications. For some notifications, the app might need to play a sound or download text, images, or other information. If the app is closed, the iPhone operating system lets the app wake up temporarily to contact company servers, send you the notification, and perform any other necessary business. The data harvesting Mysk spotted happened during this brief window.

[…]

Source: iPhone Apps Secretly Harvest Data When They Send You Notifications, Researchers Find

France fines Amazon $35 million over intrusive employee surveillance

France’s data privacy watchdog organization, the CNIL, has fined a logistics subsidiary of Amazon €32 million, or $35 million in US dollars, over the company’s use of an “overly intrusive” employee surveillance system. The CNIL says that the system employed by Amazon France Logistique “measured work interruptions with such accuracy, potentially requiring employees to justify every break or interruption.”

Of course, this system was forced on the company’s warehouse workers, as they seem to always get the short end of the Amazon stick. The CNIL says the surveillance software tracked the inactivity of employees via a mandatory barcode scanner that’s used to process orders. The system tracks idle time as interruptions in barcode scans, calling out employees for periods of downtime as low as one minute. The French organization ruled that the accuracy of this system was illegal, using Europe’s General Data Protection Regulation (GDPR) as a legal basis for the ruling.
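The mechanism described here is simple: compute the gap between consecutive scan timestamps and flag any gap over a threshold. Below is a minimal sketch of that idea; it is an illustration of the kind of interruption tracking the CNIL describes, not Amazon's actual system.

```python
from datetime import datetime, timedelta

def idle_gaps(scan_times, threshold=timedelta(minutes=1)):
    # Sort scans chronologically, then compare each consecutive pair;
    # any gap at or above the threshold is recorded as an "interruption".
    times = sorted(scan_times)
    gaps = []
    for prev, cur in zip(times, times[1:]):
        gap = cur - prev
        if gap >= threshold:
            gaps.append((prev, cur, gap))
    return gaps

# Example: four scans, one 90-second pause between the middle two.
scans = [
    datetime(2024, 1, 22, 9, 0, 0),
    datetime(2024, 1, 22, 9, 0, 40),
    datetime(2024, 1, 22, 9, 2, 10),  # 90 seconds after the previous scan
    datetime(2024, 1, 22, 9, 2, 55),
]
flagged = idle_gaps(scans)  # only the 90-second gap exceeds one minute
```

What made the system illegal under the GDPR was precisely this granularity: a one-minute threshold turns every short pause into a logged event an employee may be asked to justify.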

Notably, this isn’t being classified as a labor case, but rather as a data processing case concerning excessive monitoring. “As implemented, the processing is considered to be excessively intrusive,” the CNIL wrote, noting that Amazon uses this data to assess employee performance on a weekly basis. The organization also noted that Amazon held onto this data for all employees and temporary workers.

[…]

Source: France fines Amazon $35 million over ‘intrusive’ employee surveillance

Ubisoft Says It Out Loud: We Want People To Get Used To Not Owning What They’ve Bought

if buying isn’t owning, then piracy isn’t stealing

[…] the public too often doesn’t understand how it happens that products stop working the way they did after updates are performed remotely, or why movies purchased through an online store suddenly disappear with no refund, or why other media types purchased online likewise go poof. There is a severe misalignment, in other words, between what consumers think their money is being spent on and what is actually being purchased.

[…]

I suppose it’s at least a bit refreshing to see Ubisoft come out here and just say the quiet part out loud.

With the pre-release of Prince of Persia: The Lost Crown underway, Ubisoft has chosen this week to rebrand its Ubisoft+ subscription services and introduce a PC version of the “Classics” tier at a lower price. And a big part of this, says the publisher’s director of subscriptions, Philippe Tremblay, is getting players “comfortable” with not owning their games.

He claims the company’s subscription service had its biggest-ever month in October 2023, that the service has had “millions” of subscribers, and that “over half a billion hours” have been played. Of course, a lot of this could be a result of Ubisoft’s repeated refusals to release games on Steam, forcing PC players to use its services and likely to opt for a month’s subscription rather than pay full price for the game they were looking to buy. But still, clearly people are opting to use it.

On the one hand, there are realms where it makes sense for a subscription based gaming service where you pay a monthly fee for access and essentially never buy a game. Xbox’s Game Pass, for instance, makes all the sense in the world for some people. If you’re a more casual gamer who doesn’t want to own a library of games, but rather merely wants to be able to play a broad swath of titles at a moment’s notice, a service like that is perfect.

But Game Pass is $10 a month and includes titles from all kinds of publishers. Ubisoft’s service is nearly double that rate and only includes Ubisoft titles. That’s a much tougher sell.

[…]

Given that most people, while being part of the problem (hello), also think of this as a problem, it’s strange to see it framed as if it were faulty thinking on the part of the company’s audience.

One of the things we saw is that gamers are used to, a little bit like DVD, having and owning their games. That’s the consumer shift that needs to happen. They got comfortable not owning their CD collection or DVD collection. That’s a transformation that’s been a bit slower to happen [in games]. As gamers grow comfortable in that aspect… you don’t lose your progress. If you resume your game at another time, your progress file is still there. That’s not been deleted. You don’t lose what you’ve built in the game or your engagement with the game. So it’s about feeling comfortable with not owning your game.

The thinking in that last sentence is so misaligned as to border on nonsense. If it’s my game, then I do own it. The point Ubisoft is trying to make is that the public should get over ownership entirely and accept that it’s not my game at all. It’s my subscription service.

And while I appreciate Ubisoft saying the quiet part out loud for once, I don’t believe for a moment that this will go over well with the general gaming public.

Source: Ubisoft Says It Out Loud: We Want People To Get Used To Not Owning What They’ve Bought | Techdirt

Hint: it hasn’t!

Amazon wants you to pay to give them your data with its next-gen “Remarkable Alexa” – which is remarkable in how poorly it works

amazon alexa echo device covered in green goo

Amazon is revamping its Alexa voice assistant as it prepares to launch a new paid subscription plan this year, according to internal documents and people familiar with the matter. But the change is causing internal conflict and may lead to further delay.

Tentatively named “Alexa Plus,” the paid version of Alexa is intended to offer more conversational and personalized artificial-intelligence technology, one of the documents obtained by Business Insider says. The people said the team was working toward a June 30 launch deadline and had been testing the underlying voice technology, dubbed “Remarkable Alexa,” with 15,000 external customers.

But the quality of the new Alexa’s answers is still falling short of expectations, with external tests finding that it often shares inaccurate information. Amazon is now undertaking a major overhaul of Alexa’s technology stack to address this, though the team is experiencing some discord.

[…]

The people familiar with the matter said the limited preview with 15,000 external customers discovered that, while Remarkable Alexa was generally good at being conversational and informative, it was still deflecting answers, often giving unnecessarily long or inaccurate responses. It also needed to improve its ability to answer ambiguous customer requests that require the engagement of multiple services, such as turning on the light and music at the same time.

The new Alexa still didn’t meet the quality standards expected for Alexa Plus, these people added.

[…]

Source: Amazon Is Struggling With Its Next-Gen “Remarkable Alexa”

HP CEO: You’re ‘bad investment’ if you don’t buy HP supplies

hp printers printing money over your dead body

HP CEO Enrique Lores admitted this week that the company’s long-term objective is “to make printing a subscription” when he was questioned about the company’s approach to third-party replacement ink suppliers.

The PC and print biz is currently facing a class-action lawsuit over allegations that the company deliberately prevented its hardware from accepting non-HP-branded replacement cartridges via a firmware update.

When asked about the case in a CNBC interview, Lores said: “I think for us it is important for us to protect our IP. There is a lot of IP that we’ve built in the inks of the printers, in the printers themselves. And what we are doing is when we identify cartridges that are violating our IP, we stop the printers from work[ing].”

Later in the interview, he added: “Every time a customer buys a printer, it’s an investment for us. We are investing in that customer, and if that customer doesn’t print enough or doesn’t use our supplies, it’s a bad investment.”

[…]

HP has long banged the drum [PDF] about the potential for malware to be introduced via print cartridges, and in 2022, its bug bounty program confirmed that third-party cartridges with reprogrammable chips could deliver malware into printers.

Kind old HP is, therefore, only concerned about the welfare of customers.

Sadly, Lores’s protestations were somewhat undermined by the admission that the company’s business model depends – at least in part – on customers selecting HP supplies for their devices.

“Our objective is to make printing as easy as possible, and our long-term objective is to make printing a subscription.”

This echoes comments by former CFO Marie Myers, who said in December:

“We absolutely see when you move a customer from that pure transactional model … whether it’s Instant Ink, plus adding on that paper, we sort of see a 20 percent uplift on the value of that customer because you’re locking that person, committing to a longer-term relationship.”

Source: HP CEO: You’re ‘bad investment’ if you don’t buy HP supplies • The Register