Telehealth startup Cerebral shared millions of patients’ data with advertisers since 2019

Cerebral has revealed it shared the private health information, including mental health assessments, of more than 3.1 million patients in the United States with advertisers and social media giants like Facebook, Google and TikTok.

The telehealth startup, which exploded in popularity during the COVID-19 pandemic after rolling lockdowns and a surge in online-only virtual health services, disclosed the security lapse [This is no security lapse! This is blatant greed served by peddling people’s personal information!] in a filing with the federal government, admitting that it shared the personal and health information of patients who used the app to search for therapy or other mental health care services.

Cerebral said that it collected and shared names, phone numbers, email addresses, dates of birth, IP addresses and other demographics, as well as data collected from Cerebral’s online mental health self-assessment, which may have also included the services that the patient selected, assessment responses and other associated health information.

The full disclosure follows:

If an individual created a Cerebral account, the information disclosed may have included name, phone number, email address, date of birth, IP address, Cerebral client ID number, and other demographic information. If, in addition to creating a Cerebral account, an individual also completed any portion of Cerebral’s online mental health self-assessment, the information disclosed may also have included the service the individual selected, assessment responses, and certain associated health information.

If, in addition to creating a Cerebral account and completing Cerebral’s online mental health self-assessment, an individual also purchased a subscription plan from Cerebral, the information disclosed may also have included subscription plan type, appointment dates and other booking information, treatment, and other clinical information, health insurance/pharmacy benefit information (for example, plan name and group/member numbers), and insurance co-pay amount.

Cerebral was sharing patients’ data with tech giants in real time by way of trackers and other data-collecting code that the startup embedded within its apps. Tech companies and advertisers, like Google, Facebook and TikTok, let developers include snippets of their custom-built code in apps, and those snippets report app users’ activity back to the tech giants, often under the guise of analytics but also for advertising.
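
To make concrete what that embedded code does, here is a minimal Python sketch of the kind of event call a third-party tracker effectively performs. The endpoint, pixel ID and field names are invented for illustration; this is not Cerebral’s code or any ad platform’s real SDK.

```python
import requests

# Hypothetical sketch of what an embedded third-party tracker effectively does.
# The endpoint, field names and pixel ID are illustrative placeholders.

TRACKER_ENDPOINT = "https://ads.example.com/v1/events"  # placeholder URL

def report_event(event_name: str, user: dict, payload: dict) -> None:
    """Forward an in-app action, tied to identifying details, to an ad platform."""
    event = {
        "pixel_id": "PIXEL-123",      # identifies the app developer's ad account
        "event": event_name,          # e.g. "completed_self_assessment"
        "email": user.get("email"),   # identifiers that let the platform
        "phone": user.get("phone"),   # match the event to a real person
        "ip": user.get("ip"),
        "custom_data": payload,       # here: service selected, answers given
    }
    requests.post(TRACKER_ENDPOINT, json=event, timeout=5)

# Every tap or form submission can trigger a call like this in real time:
report_event(
    "completed_self_assessment",
    user={"email": "patient@example.com", "phone": "555-0100", "ip": "203.0.113.7"},
    payload={"service": "therapy", "responses": ["..."]},
)
```

The point is that a single innocuous-looking SDK call couples an in-app action to identifiers an ad platform can match to a real person.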

But users often have no idea that they are opting in to this tracking simply by accepting the app’s terms of use and privacy policies, which many people don’t read.

Cerebral said in its notice to customers — buried at the bottom of its website — that the data collection and sharing had been going on since October 2019, when the startup was founded. The startup said it has removed the tracking code from its apps. Though not mentioned, the tech giants are under no obligation to delete the data that Cerebral shared with them.

Because of how Cerebral handles confidential patient data, it’s covered under the U.S. health privacy law known as HIPAA. According to a list of health-related security lapses under investigation by the U.S. Department of Health and Human Services, which oversees and enforces HIPAA, Cerebral’s data lapse is the second-largest breach of health data in 2023.

News of Cerebral’s years-long data lapse comes just weeks after the U.S. Federal Trade Commission slapped GoodRx with a $1.5 million fine and ordered it to stop sharing patients’ health data with advertisers, and BetterHelp was ordered to pay customers $8.5 million for mishandling users’ data.

If you were wondering why startups today should terrify you, Cerebral is just the latest example.

Source: Telehealth startup Cerebral shared millions of patients’ data with advertisers | TechCrunch


Holy shit: German Courts Say DNS Service (Quad9) Is Implicated In Any Copyright Infringement At The Domains It Resolves

Back in September 2021 Techdirt covered an outrageous legal attack by Sony Music on Quad9, a free, recursive, anycast DNS platform. Quad9 is part of the Internet’s plumbing: it converts domain names to numerical IP addresses. It is operated by the Quad9 Foundation, a Swiss public-benefit, not-for-profit organization. Sony Music says that Quad9 is implicated in alleged copyright infringement on the sites it resolves. That’s clearly ridiculous, but unfortunately the Regional Court of Hamburg agreed with Sony Music’s argument, and issued an interim injunction against Quad9. The German Society for Civil Rights (Gesellschaft für Freiheitsrechte e.V. or “GFF”) summarizes the court’s thinking:
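
For the non-technical reader, it helps to see just how little a resolver “contributes” to any infringement. A minimal Python sketch of everything a DNS resolver does, using only the standard library:

```python
import socket

# A DNS resolver's entire role: turn a name into addresses. It neither hosts
# nor serves any of the content reachable at those addresses.
def resolve(domain: str) -> list[str]:
    results = socket.getaddrinfo(domain, None)
    return sorted({item[4][0] for item in results})  # unique IPs, v4 and v6

print(resolve("example.com"))  # e.g. a short list of IP address strings
```

The resolver returns numbers; it never touches, hosts or serves the content behind them. Holding it liable is rather like suing the phone book.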

In its interim injunction the Regional Court of Hamburg asserts a claim against Quad9 based on the principles of the German legal concept of “Stoererhaftung” (interferer liability), on the grounds that Quad9 makes a contribution to a copyright infringement that gives rise to liability, in that Quad9 resolves the domain name of website A into the associated IP address. The German interferer liability has been criticized for years because of its excessive application to Internet cases. German lawmakers explicitly abolished interferer liability for access providers with the 2017 amendment to the German Telemedia Act (TMG), primarily to protect WIFI operators from being held liable for costs as interferers.

As that indicates, this is a case of a law that is a poor fit for modern technology. Just as the liability no longer applies to WIFI operators, who are simply providing Internet access, so the German law should also not catch DNS resolvers like Quad9. The GFF post notes that Quad9 has appealed to the Hamburg Higher Regional Court against the lower court’s decision. Unfortunately, another regional court has just handed down a similar ruling against the company, reported here by Heise Online (translation by DeepL):

the Leipzig Regional Court has sentenced the Zurich-based DNS service Quad9. On pain of an administrative fine of up to 250,000 euros or up to 2 years’ imprisonment, the small resolver operator was prohibited from translating two related domains into the corresponding IP addresses. Via these domains, users can find the tracks of a Sony music album offered via Shareplace.org.

The GFF has already announced that it will be appealing along with Quad9 to the Dresden Higher Regional Court against this new ruling. It says that the Leipzig Regional Court has made “a glaring error of judgment”, and explains:

“If one follows this reasoning, the copyright liability of completely neutral infrastructure services like Quad9 would be even stricter than that of social networks, which fall under the infamous Article 17 of the EU Copyright Directive,” criticizes Felix Reda, head of the Control © project of the Society for Civil Rights. “The [EU] Digital Services Act makes it unequivocally clear that the liability rules for Internet access providers apply to DNS services. We are confident that this misinterpretation of European and German legal principles will be overturned by the Court of Appeals.”

Let’s hope so. If it isn’t, we can expect companies providing the Internet’s basic infrastructure in the EU to be bombarded with demands from the copyright industry and others for domains to be excluded from DNS resolution. The likely result is that perfectly legal sites and their holdings will be ghosted by DNS companies, which will prefer to err on the side of caution rather than risk becoming the next Quad9.

Source: Another German Court Says The DNS Service Quad9 Is Implicated In Any Copyright Infringement At The Domains It Resolves | Techdirt

There are some incredibly stupid judges and lawyers out there

YouTube Chills the Darned Hell Out On Its Cursing Policy, but you still can’t fucking say fuck

Google is finally rolling back its unpopular decree against any kind of profanity in videos, which made it harder for creators used to offering colorful sailor’s speech to monetize content on behalf of its beloved ad partners. The only thing is, Google still seems to think the “f-word” is excessively harsh language, so sorry Samuel L. Jackson, those motha-[redacted] snakes are still in line for fewer ad dollars on this motha-[redacted] plane.

On Tuesday, Google updated its support page to offer up an olive branch to crass creators upset that their potty mouths were resulting in their videos being demonetized. The company clarified that use of “moderate” profanity at any time in a video is now eligible for ad revenue.

However, the company seemed to be antagonistic to “stronger profanity” like “the f-word,” AKA “fuck.” You can’t say “fuck” in the first seven seconds or repeatedly throughout a video or else you will receive “limited ads.” Putting words like “fuck” into a title or thumbnail will result in no ad content.

What is allowed are words like “hell” or “damn” in a title or thumbnail. Words like “bitch,” “douchebag,” “asshole,” and “shit” are considered “moderate profanity,” so those are fine to use frequently in a video. But “fuck,” dear god, will hurt advertisers’ poor virgin ears. YouTube has long been extremely sensitive to what its advertisers think. For instance, the platform came close to losing big money-making ads over creepypasta content during the “Elsagate” scandal.

The changes also impacted videos which used music tracks in the background. YouTube is now saying any use of “moderate” or “strong” profanity in background music is eligible for full ad revenue.

Back in November, YouTube changed its creator monetization policy, calling it guidelines for “advertiser-friendly content.” The company decreed that any video with a thumbnail or title containing obscene language or “adult material” wouldn’t receive any ad revenue. YouTube also said it would demonetize violent content such as dead bodies without context, or virtual violence directed at a “real, named person.” Fair enough, but then YouTube said it would demonetize any video which used profanity “in the first eight seconds of the video.”

[…]

Source: YouTube Chills the Hell Out On Its Cursing Policy

What the shitting fuck, Google. Americans. I thought it was the land of the free, once?

When Given The Choice, Most Authors Reject Excessively Long Copyright Terms

Recently, Walled Culture mentioned the problem of orphan works. These are creations, typically books, that are still covered by copyright, but unavailable because the original publisher or distributor has gone out of business, or simply isn’t interested in keeping them in circulation. The problem is that without any obvious point of contact, it’s not possible to ask permission to re-publish or re-use them in some way.

It turns out that there is another serious issue, related to that of orphan works. It has been revealed by the New York Public Library, drawing on work carried out as a collaboration between the Internet Archive and the US Copyright Office. According to a report on the Vice Web site:

the New York Public Library (NYPL) has been reviewing the U.S. Copyright Office’s official registration and renewals records for creative works whose copyrights haven’t been renewed, and have thus been overlooked as part of the public domain.

The books in question were published between 1923 and 1964, before changes to U.S. copyright law removed the requirement for rights holders to renew their copyrights. According to Greg Cram, associate general counsel and director of information policy at NYPL, an initial overview of books published in that period shows that around 65 to 75 percent of rights holders opted not to renew their copyrights.

Since most people today will naturally assume that a book published between 1923 and 1964 is still in copyright, it is unlikely anyone has ever tried to re-publish or re-use material from this period. But this new research shows that the majority of these works are, in fact, already in the public domain, and therefore freely available for anyone to use as they wish.

That’s a good demonstration of how the dead hand of copyright stifles fresh creativity from today’s writers, artists, musicians and film-makers. They might have drawn on all these works as a stimulus for their own creativity, but held back because they have been brainwashed by the copyright industry into thinking that everything is in copyright for inordinate lengths of time. As a result, huge numbers of books that are freely available according to the law remain locked up with a kind of phantom copyright that exists only in people’s minds, infected as they are with copyright maximalist propaganda.

The other important lesson to be drawn from this work by the NYPL is that, given the choice, the majority of authors didn’t bother renewing their copyrights, presumably because they didn’t feel they needed to. That makes today’s automatic imposition of exaggeratedly long copyright terms not just unnecessary but also harmful in terms of the potential new works, based on public domain materials, that have been lost as a result of this continuing over-protection.

Source: When Given The Choice, Most Authors Reject Excessively Long Copyright Terms | Techdirt

Texas Bill Would Make ISPs Censor Any Abortion Information

Last week, Texas introduced a bill that would make it illegal for internet service providers to let users access information about how to get abortion pills. The bill, called the Women and Child Safety Act, would also criminalize creating, editing, or hosting a website that helps people seek abortions.

If the bill passes, internet service providers (ISPs) will be forced to block websites “operated by or on behalf of an abortion provider or abortion fund.” ISPs would also have to filter any website that helps people who “provide or aid or abet elective abortions” in almost any way, including raising money.

[…]

Five years ago, a bill like this would have violated federal law. Remember Net Neutrality? Net Neutrality forced ISPs to act like phone companies, treating all traffic the same, with almost no ability to limit or filter the content traveling on their networks. But Net Neutrality was repealed in 2018, essentially reclassifying internet service as a luxury with little regulatory oversight and upending consumers’ right to free access to the web.

[…]

Source: Texas Bill Would Bar ISPs From Hosting Abortion Websites, Info

JPMorgan Chase ‘requires workers give 6 months notice’

A veteran JPMorgan Chase banker fumed over the financial giant’s policy requiring certain staffers to give six months’ notice before being allowed to leave for another job.

The Wall Street worker, who claims to earn around $400,000 annually in total compensation after accumulating 15 years of experience, griped that the lengthy notice period likely means a lucrative job offer from another company will be rescinded.

[…]

“When I looked into the resignation process, I see that my notice period is 6 bloody months!!”

“I was in disbelief, I checked my offer letter and ‘Whoops there it is,’” the post continued.

[…]

A spokesperson for JPMorgan Chase told The Post: “In line with other e-trading organizations, some of our algo trading technology employees have an extended notice period. This affects a very small portion – less than 100 – of our 57,000 technologists.”

[…]

Workers at its India corporate offices said last year that the Wall Street giant was raising the notice period for vice presidents and below from 30 days to 60 days, according to eFinancialCareers.com.

Meanwhile, bankers at the executive director level saw their notice period bumped up to 90 days.

Source: JPMorgan Chase ‘requires workers give 6 months notice’

On the other side, I’m betting that JPMorgan Chase can just fire you with zero days’ notice.

Guy Embezzles Cool $9 Million From Poop-to-Energy Ponzi Scheme

Stop me if you’ve heard this one before: A guy embezzled nearly $9 million by convincing investors he was turning cow poop into green energy—and then not building any of the machines at all.

On Monday, 66-year-old Raymond Brewer of Porterville, California, pled guilty to charges that he’d defrauded investors. Court records show that Brewer stole $8,750,000 from investors between 2014 and 2019 with promises to build anaerobic digesters, or machines that can convert cow manure to methane gas that can then be sold as energy, on dairies in various counties in California and Idaho. But instead of actually building any of those digesters, Brewer spent the money on stuff like a new house and new Dodge Ram pickup trucks.

According to the U.S. Attorney’s Office of the Eastern District of California, Brewer was a prolific scammer. He took potential investors on tours of dairies where he said he was going to build the digesters and sent faked documents where he’d signed agreements with those dairies. When investors asked how things were going or for updates on the construction of the digesters or how the digesters were running, Brewer sent over “fake construction schedules, fake invoices for project-related costs, fake power generation reports, fake RECs, and fake pictures,” as well as forged contracts with banks and fake international investors. He must have been great at Photoshop!

Part of the appeal of the scam was in what’s known as Renewable Energy Credits (RECs), which are credits issued by the federal government signifying that renewable energy has been produced on a site; those credits can then be sold to companies looking to offset their fossil fuel emissions. Brewer told his investors that he’d get them 66% of all the profits from those credits.

Five years is a hell of a long time to promise folks money and not deliver—which is why the U.S. Attorney’s Office has described Brewer’s setup as a “Ponzi” scheme, because he began repaying old investors with money he was scamming off of new ones. When investors began to get suspicious, the U.S. Attorney’s Office said, Brewer moved to Montana and assumed a new identity. He was finally arrested in 2020.

Some profiles for Brewer’s company, CH4 Energy, are still active on business directories like PitchBook and food waste resource site ReFED. The company was even the subject of a profile on its “work” in local paper Visalia Times-Delta in 2016 and was part of a story in the LA Times in 2013 on dairy farmers and renewable energy.

In the LA Times story, Brewer is quoted as talking about the reluctance of dairy farmers to install the digesters.

“Brewer said he tested his system in other states, such as Wisconsin and Idaho, before shopping it around with California dairy farmers, whom he said were very skeptical,” the LA Times wrote. “He eventually signed his first contract with [a farmer]—‘Talk about apprehensive,’ Brewer recalled. ‘That was a little bit of an understatement.’”

Our buddy Ray wasn’t totally bullshitting—pardon the pun—in peddling his ideas. Anaerobic digesters are real machines that do convert animal waste into energy, and millions of dollars in federal and state money have been spent on the technology. However, questions remain around just how “green” this energy is and whether it’s worth the investment.

Brewer will be sentenced in June and faces up to 20 years in prison.

Source: Guy Embezzles Cool $9 Million From Poop-to-Energy Ponzi Scheme

You don’t own what you buy: Roald Dahl eBooks Censored Remotely after you bought them

“Owners of Roald Dahl ebooks are having their libraries automatically updated with the new censored versions containing hundreds of changes to language related to weight, mental health, violence, gender and race,” reports the British newspaper the Times. Readers who bought electronic versions of the writer’s books, such as Matilda and Charlie and the Chocolate Factory, before the controversial updates have discovered their copies have now been changed.

Puffin Books, the company which publishes Dahl novels, updated the electronic novels, in which Augustus Gloop is no longer described as fat or Mrs Twit as fearfully ugly, on devices such as the Amazon Kindle. Dahl’s biographer Matthew Dennison last night accused the publisher of “strong-arming readers into accepting a new orthodoxy in which Dahl himself has played no part.”
Meanwhile…

  • Children’s book author Frank Cottrell-Boyce admits in the Guardian that “as a child I disliked Dahl intensely. I felt that his snobbery was directed at people like me and that his addiction to revenge was not good. But that was fine — I just moved along.”

But Cottrell-Boyce’s larger point is “The key to reading for pleasure is having a choice about what you read” — and that childhood readers face greater threats. “The outgoing children’s laureate Cressida Cowell has spent the last few years fighting for her Life-changing Libraries campaign. It’s making a huge difference but it would have been a lot easier if our media showed a fraction of the interest they showed in Roald Dahl’s vocabulary in our children.”

Source: Roald Dahl eBooks Reportedly Censored Remotely – Slashdot

Signal says it will shut down in UK over Online Safety Bill, which wants to install spyware on all your devices

[…]

The Online Safety Bill contemplates bypassing encryption using device-side scanning to protect children from harmful material, and coincidentally breaking the security of end-to-end encryption at the same time. It’s currently being considered in Parliament and has been the subject of controversy for months.

[ something something saving children – that’s always a bad sign when they trot that one out ]

The legislation contains what critics have called “a spy clause.” [PDF] It requires companies to remove child sexual exploitation and abuse (CSEA) material or terrorist content from online platforms “whether communicated publicly or privately.” As applied to encrypted messaging, that means either encryption must be removed to allow content scanning or scanning must occur prior to encryption.
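
In other words, the scan has to happen on the device, before encryption. Here is a minimal sketch of where that check would sit in a messaging pipeline, assuming a supplied blocklist of content hashes; real proposals use perceptual hashing rather than the plain SHA-256 shown here, and all names are illustrative.

```python
import hashlib

# Hypothetical illustration of device-side scanning: the check runs on the
# sender's device BEFORE end-to-end encryption, which is exactly why critics
# say it breaks the encryption's guarantees. The blocklist entry is a
# truncated placeholder digest, not a real one.

BLOCKLIST = {"d2a84f4b8b650937ec8f73cd8be2c74a..."}

def scan_then_encrypt(attachment: bytes, encrypt) -> bytes | None:
    digest = hashlib.sha256(attachment).hexdigest()
    if digest in BLOCKLIST:
        # Content is flagged (and, under the bill, reportable) before it is
        # ever encrypted: the provider has inspected a "private" message.
        return None
    return encrypt(attachment)  # only now does end-to-end encryption happen
```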

Signal draws the line

Such schemes have been condemned by technical experts and Signal is similarly unenthusiastic.

“Signal is a nonprofit whose sole mission is to provide a truly private means of digital communication to anyone, anywhere in the world,” said Meredith Whittaker, president of the Signal Foundation, in a statement provided to The Register.

“Many millions of people globally rely on us to provide a safe and secure messaging service to conduct journalism, express dissent, voice intimate or vulnerable thoughts, and otherwise speak to those they want to be heard by without surveillance from tech corporations and governments.”

“We have never, and will never, break our commitment to the people who use and trust Signal. And this means that we would absolutely choose to cease operating in a given region if the alternative meant undermining our privacy commitments to those who rely on us.”

Asked whether she was concerned that Signal could be banned under the Online Safety rules, Whittaker told The Register, “We were responding to a hypothetical, and we’re not going to speculate on probabilities. The language in the bill as it stands is deeply troubling, particularly the mandate for proactive surveillance of all images and texts. If we were given a choice between kneecapping our privacy guarantees by implementing such mass surveillance, or ceasing operations in the UK, we would cease operations.”

[…]

“If Signal withdraws its services from the UK, it will particularly harm journalists, campaigners and activists who rely on end-to-end encryption to communicate safely.”

[…]


Source: Signal says it will shut down in UK over Online Safety Bill

Google’s Play Store Privacy Labels Are a ‘Total Failure:’ Study

[…]

“There are two main problems here,” Mozilla’s Caltrider said. “The first problem is Google only requires the information in labels to be self-reported. So, fingers crossed, because it’s the honor system, and it turns out that most labels seem to be misleading.”

Google promises to make apps fix problems it finds in the labels, and threatens to ban apps that don’t come into compliance. But the company has never provided any details about how it polices apps: Google said it’s vigilant about enforcement but didn’t describe its enforcement process, and didn’t respond to a question about any enforcement actions it’s taken in the past.

[…]

Of course, Google could just read the privacy policies where apps spell out these practices, like Mozilla did, but there’s a bigger issue at play. These apps may not even be breaking Google’s privacy label rules, because those rules are so relaxed that “they let companies lie,” Caltrider said.

“That’s the second problem. Google’s own rules for what data practices you have to disclose are a joke,” Caltrider said. “The guidelines for the labels make them useless.”

If you go looking at Google’s rules for the data safety labels, which are buried deep in a cascading series of help menus, you’ll learn that there is a long list of things that you don’t have to tell your users about. In other words, you can say you don’t collect data or share it with third parties, while you do in fact collect data and share it with third parties.

For example, apps don’t have to disclose data sharing if they have “consent” to share the data from users, or if they’re sharing the data with “service providers,” or if the data is “anonymized” (which is nonsense), or if the data is being shared for “specific legal purposes.” There are similar exceptions for what counts as data collection. Those loopholes are so big you could fill up a truck with data and drive it right on through.

[…]

Source: Google’s Play Store Privacy Labels Are a ‘Total Failure:’ Study

Which goes to show again, walled garden app stores really are no better than just downloading stuff from the internet, unless you’re the owner of the walled garden and collect 30% revenue for doing basically not much.

AI-created images lose U.S. copyrights in test for new technology

Images in a graphic novel that were created using the artificial-intelligence system Midjourney should not have been granted copyright protection, the U.S. Copyright Office said in a letter seen by Reuters.

“Zarya of the Dawn” author Kris Kashtanova is entitled to a copyright for the parts of the book Kashtanova wrote and arranged, but not for the images produced by Midjourney, the office said in its letter, dated Tuesday.

The decision is one of the first by a U.S. court or agency on the scope of copyright protection for works created with AI, and comes amid the meteoric rise of generative AI software like Midjourney, Dall-E and ChatGPT.

The Copyright Office said in its letter that it would reissue its registration for “Zarya of the Dawn” to omit images that “are not the product of human authorship” and therefore cannot be copyrighted.

The Copyright Office had no comment on the decision.

Kashtanova on Wednesday called it “great news” that the office allowed copyright protection for the novel’s story and the way the images were arranged, which Kashtanova said “covers a lot of uses for the people in the AI art community.”

Kashtanova said they were considering how best to press ahead with the argument that the images themselves were a “direct expression of my creativity and therefore copyrightable.”

Midjourney general counsel Max Sills said the decision was “a great victory for Kris, Midjourney, and artists,” and that the Copyright Office is “clearly saying that if an artist exerts creative control over an image generating tool like Midjourney …the output is protectable.”

Midjourney is an AI-based system that generates images based on text prompts entered by users. Kashtanova wrote the text of “Zarya of the Dawn,” and Midjourney created the book’s images based on prompts.

The Copyright Office told Kashtanova in October it would reconsider the book’s copyright registration because the application did not disclose Midjourney’s role.

The office said on Tuesday that it would grant copyright protection for the book’s text and the way Kashtanova selected and arranged its elements. But it said Kashtanova was not the “master mind” behind the images themselves.

“The fact that Midjourney’s specific output cannot be predicted by users makes Midjourney different for copyright purposes than other tools used by artists,” the letter said.

Source: AI-created images lose U.S. copyrights in test for new technology | Reuters

I am not sure why they are calling this a victory, as the Copyright Office is basically reiterating that what Kashtanova created is theirs and what an AI created cannot be copyrighted by them or by the AI itself. That’s a loss for the AI.

MetaGuard: Going Incognito in the Metaverse

[…]

with numerous recent studies showing the ease at which VR users can be profiled, deanonymized, and data harvested, metaverse platforms carry all the privacy risks of the current internet and more while at present having none of the defensive privacy tools we are accustomed to using on the web. To remedy this, we present the first known method of implementing an “incognito mode” for VR. Our technique leverages local ε-differential privacy to quantifiably obscure sensitive user data attributes, with a focus on intelligently adding noise when and where it is needed most to maximize privacy while minimizing usability impact. Moreover, our system is capable of flexibly adapting to the unique needs of each metaverse application to further optimize this trade-off. We implement our solution as a universal Unity (C#) plugin that we then evaluate using several popular VR applications. Upon faithfully replicating the most well known VR privacy attack studies, we show a significant degradation of attacker capabilities when using our proposed solution.
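
The core mechanism is easy to sketch. Below is a minimal illustration of local ε-differential privacy via the Laplace mechanism, applied on-device to a single bounded attribute; the attribute, bounds and ε value are illustrative choices, not MetaGuard’s actual parameters.

```python
import math
import random

# Minimal sketch of local epsilon-differential privacy via the Laplace
# mechanism, the kind of noise-adding MetaGuard applies to VR telemetry.

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of a zero-mean Laplace distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize(value: float, lower: float, upper: float, epsilon: float) -> float:
    """Add noise calibrated to the attribute's range before it leaves the device."""
    sensitivity = upper - lower           # bounded attribute => bounded sensitivity
    noisy = value + laplace_noise(sensitivity / epsilon)
    return min(max(noisy, lower), upper)  # clamp back to a plausible value

# e.g. obscure a user's height (meters); smaller epsilon means more noise
print(privatize(1.83, lower=1.50, upper=2.00, epsilon=1.0))
```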

[…]

Source: MetaGuard: Going Incognito in the Metaverse | Berkeley RDI

3 motion points allow you to be identified within seconds in VR

[…]

In a paper provided to The Register in advance of its publication on ArXiv, academics Vivek Nair, Wenbo Guo, Justus Mattern, Rui Wang, James O’Brien, Louis Rosenberg, and Dawn Song set out to test the extent to which individuals in VR environments can be identified by body movement data.

The boffins gathered telemetry data from more than 55,000 people who played Beat Saber, a VR rhythm game in which players wave hand controllers to music. Then they digested 3.96TB of data, from game leaderboard BeatLeader, consisting of 2,669,886 game replays from 55,541 users during 713,013 separate play sessions.

These Beat Saber Open Replay (BSOR) files contained metadata (devices and game settings), telemetry (measurements of the position and orientation of players’ hands, head, and so on), context info (type, location, and timing of in-game stimuli), and performance stats (responses to in-game stimuli).

From this, the researchers focused on the data derived from the head and hand movements of Beat Saber players. Just five minutes of those three data points proved enough to train a classification model that, given 100 seconds of motion data from the game, could uniquely identify the player 94 percent of the time. And with just 10 seconds of motion data, the classification model managed an accuracy of 73 percent.
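
To give a feel for how simple such an attack can be, here is a hedged sketch of a motion-identification pipeline — not the authors’ actual model: summarize each session’s head-and-hand telemetry into a feature vector, then train an off-the-shelf classifier to name the user.

```python
# Illustrative sketch only. Assumes `sessions` is a list of (user_id, frames)
# pairs, where frames is an (n_frames, n_channels) NumPy array of head and
# hand positions/orientations over time.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def featurize(frames: np.ndarray) -> np.ndarray:
    # Crude per-channel summary statistics; real work uses richer features.
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0),
                           np.abs(np.diff(frames, axis=0)).mean(axis=0)])

def train_identifier(sessions):
    X = np.stack([featurize(frames) for _, frames in sessions])
    y = [user_id for user_id, _ in sessions]
    return RandomForestClassifier(n_estimators=200).fit(X, y)

# model.predict(featurize(new_frames)[None, :]) then names the "anonymous" player.
```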

“The study demonstrates that over 55k ‘anonymous’ VR users can be de-anonymized back to the exact individual just by watching their head and hand movements for a few seconds,” said Vivek Nair, a UC Berkeley doctoral student and one of the authors of the paper, in an email to The Register.

“We have known for a long time that motion reveals information about people, but what this study newly shows is that movement patterns are so unique to an individual that they could serve as an identifying biometric, on par with facial or fingerprint recognition. This really changes how we think about the notion of ‘privacy’ in the metaverse, as just by moving around in VR, you might as well be broadcasting your face or fingerprints at all times!”

[…]

“There have been papers as early as the 1970s which showed that individuals can identify the motion of their friends,” said Nair. “A 2000 paper from Berkeley even showed that with motion capture data, you can recreate a model of a person’s entire skeleton.”

“What hasn’t been shown, until now, is that the motion of just three tracked points in VR (head and hands) is enough to identify users on a huge (and maybe even global) scale. It’s likely true that you can identify and profile users with even greater accuracy outside of VR when more tracked objects are available, such as with full-body tracking that some 3D cameras are able to do.”

[…]

Nair said he remains optimistic about the potential of systems like MetaGuard – a VR incognito mode project he and colleagues have been working on – to address privacy threats by altering VR in a privacy-preserving way rather than trying to prevent data collection.

The paper suggests similar data defense tactics: “We hope to see future works which intelligently corrupt VR replays to obscure identifiable properties without impeding their original purpose (e.g., scoring or cheating detection).”

One reason to prefer data alteration over data denial is that there may be VR applications (e.g., motion-based medical diagnostics) that justify further investment in the technology, as opposed to propping up pretend worlds just for the sake of privacy pillaging.

[…]

Source: How virtual reality telemetry is the next threat to privacy • The Register

Google wants Go reporting telemetry data by default

Russ Cox, a Google software engineer steering the development of the open source Go programming language, has presented a possible plan to implement telemetry in the Go toolchain.

However, many in the Go community object because the plan calls for telemetry by default.

These alarmed developers would prefer an opt-in rather than an opt-out regime, a position the Go team rejects because it would ensure low adoption and would reduce the amount of telemetry data received to the point it would be of little value.

Cox’s proposal summarized lengthier documentation in three blog posts.

Telemetry, as Cox describes it, involves Go software sending data to a server to provide information about which functions are being used and how the software is performing. He argues it is beneficial for open source projects to have that information to guide development.
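
The shape of the proposed design is roughly the sketch below: counters incremented locally, with aggregate counts uploaded periodically unless the user opts out. This is a generic Python illustration of the pattern, not the Go team’s code; the file location and opt-out switch are invented for the example.

```python
import json
import os
from pathlib import Path

# Hedged sketch of the counter-plus-upload pattern, not Go's implementation.
COUNTER_FILE = Path.home() / ".config" / "toolchain" / "counters.json"

def bump(counter: str) -> None:
    """Increment a named usage counter locally; nothing leaves the machine here."""
    COUNTER_FILE.parent.mkdir(parents=True, exist_ok=True)
    counts = json.loads(COUNTER_FILE.read_text()) if COUNTER_FILE.exists() else {}
    counts[counter] = counts.get(counter, 0) + 1
    COUNTER_FILE.write_text(json.dumps(counts))

def weekly_report() -> dict | None:
    # The contested part: this runs by default unless the user has opted out,
    # rather than only after the user has opted in.
    if os.environ.get("TOOLCHAIN_TELEMETRY", "on") == "off":
        return None
    if not COUNTER_FILE.exists():
        return {}
    return json.loads(COUNTER_FILE.read_text())  # aggregate counts, no user data
```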

“I believe that open-source software projects need to explore new telemetry designs that help developers get the information they need to work efficiently and effectively, without collecting invasive traces of detailed user activity,” he wrote.

[…]

Some people believe they have a right to privacy, to be left alone, and to demand that their rights are respected through opt-in consent.

As developer Louis Thibault put it, “The Go dev team seems not to have internalized the principle of affirmative consent in matters of data collection.”

Others, particularly in the ad industry, but in other endeavors as well, see opt-in as an existential threat. They believe that they have a right to gather data and that it’s better to seek forgiveness via opt-out than to ask for permission unlikely to be given via opt-in.

Source: Google’s Go may add telemetry reporting that’s on by default • The Register

Windows 11 Sends Tremendous Amount of User Data to Third Parties – pretty much spyware for loads of people!

Many programs collect user data and send it back to their developers to improve software or provide more targeted services. But according to the PC Security Channel (via Neowin), Microsoft’s Windows 11 sends data not only to the Redmond, Washington-based software giant, but also to multiple third parties.

To analyze DNS traffic generated by a freshly installed copy of Windows 11 on a brand-new notebook, the PC Security Channel used the Wireshark network protocol analyzer, which reveals precisely what is happening on a network. The results were astounding enough for the YouTube channel to call Microsoft’s Windows 11 “spyware.”
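
This kind of check is easy to reproduce. Here is a minimal sketch using the scapy packet library to log every DNS lookup a machine makes (Wireshark shows the same thing with a dns display filter); it needs root/administrator privileges, and it won’t see lookups sent over encrypted DNS.

```python
# Minimal sketch of the experiment: log every domain this machine looks up.
# Requires scapy (`pip install scapy`) and elevated privileges to sniff.
from scapy.all import DNSQR, sniff

def log_query(pkt):
    if pkt.haslayer(DNSQR):  # a DNS question record
        print(pkt[DNSQR].qname.decode().rstrip("."))

# Watching port 53 on a fresh install reveals which hosts the OS phones home to.
sniff(filter="udp port 53", prn=log_query, store=False)
```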

As it turned out, an all-new Windows 11 PC that was never used to browse the Internet contacted not only Windows Update, MSN and Bing servers, but also Steam, McAfee, geo.prod.do, and Comscore ScorecardResearch.com. Apparently, the latest operating system from Microsoft collected and sent telemetry data to various market research companies, advertising services, and the like.

To prove the point, the PC Security Channel used the same tool to find out what Windows XP contacted after a fresh install, and it turned out that the only things the 20-plus-year-old operating system contacted were Windows Update and Microsoft Update servers.

“As with any modern operating system, users can expect to see data flowing to help them remain secure, up to date, and keep the system working as anticipated,” a Microsoft spokesperson told Tom’s Hardware. “We are committed to transparency and regularly publish information about the data we collect to empower customers to be more informed about their privacy.”

Some of the claims may be, technically, overblown. Telemetry data is mentioned in Windows’ terms of service, which many people skip over to use the operating system. And you can choose not to enable at least some of this by turning off settings the first time you boot into the OS.

“By accepting this agreement and using the software you agree that Microsoft may collect, use, and disclose the information as described in the Microsoft Privacy Statement (aka.ms/privacy), and as may be described in the user interface associated with the software features,” the terms of service read. It also points out that some data-sharing settings can be turned off.

Obviously, a lot has changed in 20 years and we now use more online services than back in the early 2000s. As a result, various telemetry data has to be sent online to keep certain features running. But at the very least, Microsoft should do a better job of expressly asking for consent and stating what will be sent and where, because you can’t opt out of all of the data-sharing “features.” The PC Security Channel warns that even when telemetry tracking is disabled by third-party utilities, Windows 11 still sends certain data.

Source: Windows 11 Sends Tremendous Amount of User Data to Third Parties, YouTuber Claims (Update) | Tom’s Hardware

Just when you thought Microsoft was the good guys again and it was all Google, Apple, Amazon and Meta/Facebook being evil, they are back at it to prove they still have it!

Microsoft won’t access private data in Office version scan installed as OS update, they say

Microsoft wants everyone to know that it isn’t looking to invade their privacy while looking through their Windows PCs to find out-of-date versions of Office software.

In its KB5021751 update last month, Microsoft included a plan to scan Windows systems to smoke out Office versions that are no longer supported or nearing the end of support. Those include Office 2007 (which saw support end in 2017), Office 2010 (in 2020) and the 2013 build (this coming April).

The company stressed that the scan would run only once and would not install anything on the user’s Windows system, adding that the file for the update is scanned to ensure it’s not infected by malware and is stored on highly secure servers to prevent unauthorized changes to it.

The update caused some discussion among users, at least enough to convince Microsoft to make another pitch that it is respecting user privacy and won’t access private data despite scanning their systems.

The update collects diagnostic and performance data so that it can determine the use of various versions of Office and how to best support and service them, the software maker wrote in an expanded note this week. The update will silently run once to collect the data and no files are left on the user’s systems once the scan is completed.

“This data is gathered from registry entries and APIs,” it wrote. “The update does not gather licensing details, customer content, or data about non-Microsoft products. Microsoft values, protects, and defends privacy.”
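
The registry half of such a scan is mundane. Here is a hedged Python sketch of reading installed Office versions from the Windows registry; the key path is the conventional location for Office entries, but exactly what Microsoft’s updater reads has not been published.

```python
import winreg  # Windows-only standard library module

# Hedged sketch of a registry-based Office version check; the key path is
# conventional, not confirmed to be what Microsoft's scan actually reads.
OFFICE_KEY = r"SOFTWARE\Microsoft\Office"

def installed_office_versions() -> list[str]:
    versions = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, OFFICE_KEY) as key:
        i = 0
        while True:
            try:
                name = winreg.EnumKey(key, i)
            except OSError:  # no more subkeys
                break
            if name.replace(".", "").isdigit():  # e.g. "14.0" = Office 2010
                versions.append(name)
            i += 1
    return versions

print(installed_office_versions())  # e.g. ['14.0', '16.0']
```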

[…]

Source: Microsoft won’t access private data in Office version scan • The Register

Of course, just sending data about what version of Office is installed is in fact sending private data about stuff installed on your PC. This is Not OK.

Claims Datadog asked developer to kill open source data tool, which he did. And now he’s resurrected it.

After a delay of over a year, an open source code contribution to enable the export of data from Datadog’s Application Performance Monitoring (APM) platform finally got merged on Tuesday into a collection of OpenTelemetry components.

The reason for the delay, according to John Dorman, the software developer who wrote the Datadog APM Receiver code, is that, about a year ago, Datadog asked him not to submit the software.

On February 8 last year Dorman, who goes by the name “boostchicken” on GitHub, announced that he was closing his pull request – the GitHub term for programming code contributed to a project.

“After some consideration I’ve decided to close this PR [pull request],” he wrote. “[T]here are better ways to OTEL [OpenTelemetry] support w/ Datadog.”

Members of the open source community who are focused on application monitoring – collecting and analyzing logs, traces of app activity, and other metrics that can be useful to keep applications running – had questions, claiming that Datadog prefers to lock customers into its product.

Shortly after the post, Charity Majors, CEO of Honeycomb.io, a rival application monitoring firm, wrote a Twitter thread elaborating on the benefits of OpenTelemetry and calling out Datadog for only supporting OTEL as a one-way street.

“Datadog has been telling users they can use OTEL to get data in, but not get data out,” Majors wrote. “The Datadog OTEL collector PR was silently killed. The person who wrote it appears to have been pressured into closing it, and nothing has been proposed to replace it.”

Behavior of this sort would be inconsistent with the goals of the Cloud Native Computing Foundation’s (CNCF) OpenTelemetry project, which seeks “to provide a set of standardized vendor-agnostic SDKs, APIs, and tools for ingesting, transforming, and sending data to an Observability back-end (i.e. open source or commercial vendor).”

That is to say, the OpenTelemetry project aims to promote data portability, instead of hindering it, as is common among proprietary software vendors.

The smoking hound

On January 26 Dorman confirmed suspicions that he had been approached by Datadog and asked not to proceed with his efforts.

“I owe the community an apology on this one,” Dorman wrote in his pull request thread. “I lacked the courage of my convictions and when push came to shove and I had to make the hard choice, I took the easy way out.”

“Datadog ‘asked’ me to kill this pull request. There were other members from my organization present that let me know this answer will be a ‘ok’. I am sure I could have said no, at the moment I just couldn’t fathom opening Pandora’s Box. There you have it, no NDA, no stack of cash. I left the code hoping someone could carry on. I was willing to give [Datadog] this code, no strings attached as long as it moved OTel forward. They declined.”

He added, “However, I told them if you don’t support OpenTelemetry in a meaningful way, I will start sending pull requests again. So here we are. I feel I have given them enough time to do the right thing.”

Indeed, Dorman subsequently re-opened his pull request, which on Tuesday was merged into the repository for OpenTelemetry Collector components. His Datadog APM Receiver can ingest traces in the Datadog Trace Agent format.

Coincidentally, Datadog on Tuesday published a blog post titled, “Datadog’s commitment to OpenTelemetry and the open source community.” It makes no mention of the alleged request to “kill [the] pull request.” Instead, it enumerates various ways in which the company has supported OpenTelemetry recently.

The Register asked Datadog for comment. We’ve not heard back.

Dorman, who presently works for Meta, did not respond to a request for comment. However, last week, via Twitter, he credited Grafana, an open source Datadog competitor, for having “formally sponsored” the work and for pointing out that Datadog “refuses to support OTEL in meaningful ways.”

The OpenTelemetry Governance Committee for the CNCF provided The Register with the following statement:

“We’re still trying to make sense of what happened here; we’ll comment on it once we have a full understanding. Regardless, we are happy to review and accept any contributions which push the project forward, and this [pull request] was merged yesterday,” it said.

Source: Claims Datadog asked developer to kill open source data tool • The Register

FTC Fines GoodRx $1.5M for Sending Medication Data to Facebook, Google, others

The Federal Trade Commission took historic action against the medication discount service GoodRx Wednesday, issuing a $1.5 million fine against the company for sharing data about users’ prescriptions with Facebook, Google, and others. It’s a move that could usher in a new era of health privacy in the United States.

“Digital health companies and mobile apps should not cash in on consumers’ extremely sensitive and personally identifiable health information,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection, in a statement.

[…]

In addition to a fine, GoodRx has agreed to a first-of-its-kind provision banning the company from sharing health data with third parties for advertising purposes. That may sound unsurprising, but many consumers don’t realize that health privacy laws generally don’t apply to companies that aren’t affiliated with doctors or insurance companies.

[…]

GoodRx is a health technology company that gives out free coupons for discounts on common medications. The company also connects users with healthcare providers for telehealth visits. But GoodRx also shared data with third-party advertising companies about the prescriptions you’re buying and looking up, which drew the ire of the FTC.

GoodRx’s privacy problems were first uncovered by this reporter in an investigation with Consumer Reports, followed by a similar report in Gizmodo. At the time, if you looked up Viagra, Prozac, PrEP, or any other medication, GoodRx would tell Facebook, Google, and a variety of companies in the ad business, such as Criteo, Branch, and Twilio. GoodRx wasn’t selling the data. Instead, it shared the information so those companies could help GoodRx target its own customers with ads for more drugs.

[…]

Source: FTC Fines GoodRx $1.5M for Sending Medication Data to Facebook

An AI robot lawyer was set to argue in court. Scared lawyers shut it down with jail threats

A British man who planned to have a “robot lawyer” help a defendant fight a traffic ticket has dropped the effort after receiving threats of possible prosecution and jail time.

Joshua Browder, the CEO of the New York-based startup DoNotPay, created a way for people contesting traffic tickets to use arguments in court generated by artificial intelligence.

Here’s how it was supposed to work: The person challenging a speeding ticket would wear smart glasses that both record court proceedings and dictate responses into the defendant’s ear from a small speaker. The system relied on a few leading AI text generators, including ChatGPT and DaVinci.
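
Mechanically, that is a simple loop. Below is a hypothetical sketch of the flow Browder describes; every function here is a stand-in passed by the caller, not DoNotPay’s actual code or any real speech or LLM API.

```python
# Hypothetical sketch of the courtroom loop as described: transcribe what is
# said, generate a reply, whisper it to the defendant. All function names are
# illustrative stand-ins; this is not DoNotPay's system.

def courtroom_loop(transcribe, generate_reply, speak_into_earpiece):
    history = [{"role": "system",
                "content": "You are defending a client against a speeding ticket."}]
    while True:  # runs for the duration of the hearing
        heard = transcribe()                 # smart glasses' microphone feed
        history.append({"role": "user", "content": heard})
        reply = generate_reply(history)      # e.g. a ChatGPT-style text model
        history.append({"role": "assistant", "content": reply})
        speak_into_earpiece(reply)           # dictated quietly to the defendant
```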

The first-ever AI-powered legal defense was set to take place in California on Feb. 22, but not anymore.

As word got out, an uneasy buzz began to swirl among various state bar officials, according to Browder. He says angry letters began to pour in.

“Multiple state bars have threatened us,” Browder said. “One even said a referral to the district attorney’s office and prosecution and prison time would be possible.”

In particular, Browder said one state bar official noted that the unauthorized practice of law is a misdemeanor in some states, punishable by up to six months in county jail.

“Even if it wouldn’t happen, the threat of criminal charges was enough to give it up,” he said. “The letters have become so frequent that we thought it was just a distraction and that we should move on.”

State bar organizations license and regulate attorneys, as a way to ensure people hire lawyers who understand the law.

Browder declined to say which state bar in particular sent letters, or which official made the threat of possible prosecution, saying his startup, DoNotPay, is under investigation by multiple state bars, including California’s.

[…]

“The truth is, most people can’t afford lawyers,” he said. “This could’ve shifted the balance and allowed people to use tools like ChatGPT in the courtroom that maybe could’ve helped them win cases.”

The future of robot lawyers faces uncertainty for another reason that is far simpler than the bar officials’ existential questions: courtroom rules.

Recording audio during a live legal proceeding is not permitted in federal court and is often prohibited in state courts. The AI tools developed by DoNotPay, which remain completely untested in actual courtrooms, require recording audio of arguments in order for the machine-learning algorithm to generate responses.

“I think calling the tool a ‘robot lawyer’ really riled a lot of lawyers up,” Browder said. “But I think they’re missing the forest for the trees. Technology is advancing and courtroom rules are very outdated.”


Source: An AI robot lawyer was set to argue in court. Real lawyers shut it down. : NPR

Lawyers protecting their own at the cost of the population? Who’d have thunk it?

Meta’s WhatsApp fined 5.5 mln euro by lead EU privacy regulator

Meta’s (META.O) WhatsApp subsidiary was fined 5.5 million euros ($5.95 million) on Thursday by Ireland’s Data Privacy Commissioner (DPC), its lead EU privacy regulator, for an additional breach of the bloc’s privacy laws.

The DPC also told WhatsApp to reassess how it uses personal data for service improvements following a similar order it issued this month to Meta’s other main platforms, Facebook and Instagram, which stated Meta must reassess the legal basis upon which it targets advertising through the use of personal data.

[…]

Source: Meta’s WhatsApp fined 5.5 mln euro by lead EU privacy regulator | Reuters

Google Accused of Creating Digital Ad Monopoly in New Justice Dept. Suit

The Department of Justice filed a lawsuit against Google Tuesday, accusing the tech giant of using its market power to create a monopoly in the digital advertising business over the course of 15 years.

Google “corrupted legitimate competition in the ad tech industry by engaging in a systematic campaign to seize control of the wide swath of high-tech tools used by publishers, advertisers and brokers, to facilitate digital advertising,” the Justice Department alleges. Eight state attorneys general joined in the suit, filed in Virginia federal court. Google has faced five antitrust suits since 2020.

[…]

Source: Google Accused of Digital Ad Monopoly in New Justice Dept. Suit

Indian Android Users Can Finally Use Alternate Search and Payment Methods and forked Google apps

Android users in India will soon have more control over their devices, thanks to a court ruling. Beginning next month, Indian Android wielders can choose a different billing system when paying for apps and in-app smartphone purchases rather than default to going through the Play Store. Google will also allow Indian users to select a different search engine as their default right as they set up a new device, which might have implications for upcoming EU regulations.

The move comes after a ruling last week by India’s Supreme Court. The trial started late last year when the Competition Commission of India (CCI) fined Google $161 million for imposing restrictions on its manufacturing partners. Google attempted to challenge the order by maintaining this kind of practice would stall the Android ecosystem and that “no other jurisdiction has ever asked for such far-reaching changes.”

[…]

Google also won’t be able to require the installation of its branded apps to grant the license for running Android OS anymore. From now on, device manufacturers in India will be able to license “individual Google apps” as they like for pre-installation rather than needing to bundle the whole kit and caboodle. Google is also updating the Android compatibility requirements for its OEM partners to “build non-compatible or forked variants.”

[…]

Of particular note is seeing how users will react to being able to choose whether to buy apps and other in-app purchases through the Play Store, where Google takes a 30% cut from each transaction, or through an alternative billing service like JIO Money or Paytm—or even Amazon Pay, available in India.

[…]

The Department of Justice in the United States is also suing Google’s parent company, Alphabet, for a second time this week for practices within its digital advertising business, alleging that the company “corrupted legitimate competition in the ad tech industry” to build out its monopoly.

Source: Indian Android Users Can Finally Use Alternate Search and Payment Methods

US law enforcement has warrantless access to many money transfers

Your international money transfers might not be as discreet as you think. Senator Ron Wyden and The Wall Street Journal have learned that US law enforcement can access details of money transfers without a warrant through an obscure surveillance program the Arizona attorney general’s office created in 2014. A database stored at a nonprofit, the Transaction Record Analysis Center (TRAC), provides full names and amounts for larger transfers (above $500) sent between the US, Mexico and 22 other regions through services like Western Union, MoneyGram and Viamericas. The program covers data for numerous Caribbean and Latin American countries in addition to Canada, China, France, Malaysia, Spain, Thailand, Ukraine and the US Virgin Islands. Some domestic transfers also enter the data set.

[…]

The concern, of course, is that officials can obtain sensitive transaction details without court oversight or customers’ knowledge. An unscrupulous officer could secretly track large transfers. Wyden adds that the people in the database are more likely to be immigrants, minorities and low-income residents who don’t have bank accounts and already have fewer privacy protections. The American Civil Liberties Union also asserts that the subpoenas used to obtain this data violate federal law. Arizona issued at least 140 of these subpoenas between 2014 and 2021.

[…]

Source: US law enforcement has warrantless access to many money transfers | Engadget

Meta sues surveillance company for allegedly scraping more than 600,000 accounts – pots and kettles

Meta has filed a lawsuit against Voyager Labs, which it has accused of creating tens of thousands of fake accounts to scrape data from more than 600,000 Facebook users’ profiles. It says the surveillance company pulled information such as posts, likes, friend lists, photos, and comments, along with other details from groups and pages. Meta claims that Voyager masked its activity using its Surveillance Software, and that the company has also scraped data from Instagram, Twitter, YouTube, LinkedIn and Telegram to sell and license for profit.

In the complaint, which was obtained by Gizmodo, Meta has asked a judge to permanently ban Voyager from Facebook and Instagram. “As a direct result of Defendant’s unlawful actions, Meta has suffered and continues to suffer irreparable harm for which there is no adequate remedy at law, and which will continue unless Defendant’s actions are enjoined,” the filing reads. Meta said Voyager’s actions have caused it “to incur damages, including investigative costs, in an amount to be proven at trial.”

Meta claims that Voyager scraped data from accounts belonging to “employees of non-profit organizations, universities, news media organizations, healthcare facilities, the armed forces of the United States, and local, state, and federal government agencies, as well as full-time parents, retirees, and union members.” The company noted in a blog post that it disabled accounts linked to Voyager and that it filed the suit to enforce its terms and policies.

[…]

In 2021, The Guardian reported that the Los Angeles Police Department had tested Voyager’s social media surveillance tools in 2019. The company is said to have told the department that police could use the software to track the accounts of a suspect’s friends on social media, and that the system could predict crimes before they took place by making assumptions about a person’s activity.

According to The Guardian, Voyager has suggested factors like Instagram usernames denoting Arab pride or tweeting about Islam could indicate someone is leaning toward extremism. Other companies, such as Palantir, have worked on predictive policing tech. Critics such as the Electronic Frontier Foundation claim that tech can’t predict crime and that algorithms merely perpetuate existing biases.

Data scraping is an issue that Meta has to take seriously. In 2021, it sued an individual for allegedly scraping data on more than 178 million users. Last November, the Irish Data Protection Commission fined the company €265 million ($277 million) for failing to stop bad actors from obtaining millions of people’s phone numbers and other data, which were published elsewhere online. The regulator said Meta failed to comply with GDPR data protection rules.

Source: Meta sues surveillance company for allegedly scraping more than 600,000 accounts | Engadget

Google will pay $9.5 million to settle Washington DC AG’s location-tracking lawsuit

Google has agreed to pay $9.5 million to settle a lawsuit brought by Washington DC Attorney General Karl Racine, who accused the company earlier this year of “deceiving users and invading their privacy.” Google has also agreed to change some of its practices, primarily concerning how it informs users about collecting, storing and using their location data.

“Google leads consumers to believe that consumers are in control of whether Google collects and retains information about their location and how that information is used,” the complaint, which Racine filed in January, read. “In reality, consumers who use Google products cannot prevent Google from collecting, storing and profiting from their location.”

Racine’s office also accused Google of employing “dark patterns,” which are design choices intended to deceive users into carrying out actions that don’t benefit them. Specifically, the AG’s office claimed that Google repeatedly prompted users to switch on location tracking in certain apps and informed them that certain features wouldn’t work properly if location tracking wasn’t on. Racine and his team found that location data wasn’t even needed for the app in question. They asserted that Google made it “impossible for users to opt out of having their location tracked.”


The $9.5 million payment is a paltry one for Google. Last quarter, it took parent company Alphabet under 20 minutes to make that much in revenue. The changes that the company will make to its practices as part of the settlement may have a bigger impact.

Folks who currently have certain location settings on will receive notifications telling them how they can disable each setting, delete the associated data and limit how long Google can keep that information. Users who set up a new Google account will be informed which location-related account settings are on by default and offered the chance to opt out.

Google will need to maintain a webpage that details its location data practices and policies. This will include ways for users to access their location settings and details about how each setting impacts Google’s collection, retention or use of location data.

Moreover, Google will be prevented from sharing a person’s precise location data with a third-party advertiser without the user’s explicit consent. The company will need to delete location data “that came from a device or from an IP address in web and app activity within 30 days” of obtaining the information.

[…]

Source: Google will pay $9.5 million to settle Washington DC AG’s location-tracking lawsuit | Engadget