ICE, CBP, Secret Service All Illegally Used Smartphone Location Data

In a bombshell report, an oversight body for the Department of Homeland Security (DHS) found that Immigration and Customs Enforcement (ICE), Customs and Border Protection (CBP), and the Secret Service all broke the law while using location data harvested from ordinary apps installed on smartphones. In one instance, a CBP official inappropriately used the technology to track the location of coworkers with no investigative purpose. For years, U.S. government agencies have been buying access to location data through commercial vendors, a practice which critics say skirts the Fourth Amendment requirement for a warrant. During that time, the agencies have typically refused to publicly explain the legal basis for their purchase and use of the data. Now, the report shows that three of the main customers of commercial location data broke the law while doing so, and didn’t have any supervisory review to ensure proper use of the technology. The report also recommends that ICE stop all use of such data until it obtains the necessary approvals, a request that ICE has refused.

The report, titled “CBP, ICE, and Secret Service Did Not Adhere to Privacy Policies or Develop Sufficient Policies Before Procuring and Using Commercial Telemetry Data,” is dated September 28, 2023, and comes from Joseph V. Cuffari, the Inspector General for DHS. The report was originally marked as “law enforcement sensitive,” but the Inspector General has now released it publicly.

Source: ICE, CBP, Secret Service All Illegally Used Smartphone Location Data – Slashdot

EPIC urges FTC to investigate Grindr’s data practices

On Wednesday, EPIC filed a complaint with the US government watchdog over Grindr’s “apparent failure to safeguard users’ sensitive personal data.” This includes both present and past users who have since deleted their accounts, according to the complaint. Despite promising in its privacy policy to delete personal info if customers remove their account, Grindr allegedly retained and disclosed some of this data to third parties.

Considering that people trust the dating app with a ton of very sensitive information — this includes their sexual preferences, self-reported HIV status, chat history, photos including nudes, and location information — “learning that Grindr breaks the promises it makes to users would likely affect a consumer’s decision regarding whether to use Grindr,” the complaint states [PDF].

Grindr, for its part, says privacy is of the utmost importance to it, and that these “unfounded” claims stem from allegations made by a disgruntled ex-worker. So that’s all right then.

“Privacy is a top priority for Grindr and the LGBTQ+ community we serve, and we have adopted industry-leading privacy practices and tools to protect and empower our users,” a spokesperson told The Register.

“We are sorry that the former employee behind the unfounded allegations in today’s request is dissatisfied with his departure from the company; we wish him the best.”

The former employee in question is Grindr’s ex-chief privacy officer Ron De Jesus. In June, De Jesus filed a wrongful termination lawsuit [PDF] against his former bosses that also accused the dating app of violating privacy laws.

According to the lawsuit, De Jesus was “leading the charge to keep Grindr compliant with state, national, and international laws” after Norway’s data protection agency fined the dating app biz about $12 million in December 2021 and a Wall Street Journal article in May 2022 accused the application developer of selling users’ location data.

But despite De Jesus’ attempts, “Grindr placed profit over privacy and got rid of Mr De Jesus for his efforts and reports,” the lawsuit alleges.

EPIC’s complaint, which highlights De Jesus’ allegations, asks the FTC to look into potential violations of privacy law, including deceptive data retention and disclosure practices.

It also accuses Grindr of violating the Health Breach Notification Rule (HBNR). The dating app is subject to the HBNR because it asks users to self-report health data including HIV status, last-tested date, and vaccination status. By sharing these records with third parties and retaining health data after users deleted their accounts, Grindr allegedly breached the HBNR, EPIC says.

The privacy advocates at EPIC want the FTC to make Grindr comply with the law and stop any “unlawful or impermissible” data retention practices. Additionally, the complaint calls on the federal agency to force Grindr to notify any users whose data was misused, and to impose fines against the dating app for any violations of the HBNR.

Source: EPIC urges FTC to investigate Grindr’s data practices • The Register

Publishing A Book Means No Longer Having Control Over How Others Feel About It, Or How They’re Inspired By It. And That Includes AI.

[…]

I completely understand why some authors are extremely upset about finding out that their works were used to train AI. It feels wrong. It feels exploitive. (I do not understand their lawsuits, because I think they’re very much confused about how copyright law works.)

But, to me, many of the complaints about this amount to a discussion similar to ones we’ve had in the past, regarding concerns about what would happen, if works were released without copyright, should someone “bad” reuse them. This sort of thought experiment is silly, because once a work is released and enters the messy real world, it’s entirely possible for things to happen that the original creator disagrees with or hates. Someone can interpret the work in ridiculous ways. Or it can inspire bad people to do bad things. Or any of a long list of other possibilities.

The original author has the right to speak up about the bad things, or to denounce the bad people, but the simple fact is that once you’ve released a work into the world, the original author no longer has control over how that work is used and interpreted by the world. Releasing a work into the world is an act of losing control over that work and what others can do in response to it. Or how or why others are inspired by it.

But, when it comes to the AI fights, many are insisting that they should retain exactly that kind of control, and much of this came to a head recently when The Atlantic released a tool that allowed anyone to search to see which authors were included in the Books3 dataset (one of multiple collections of books that have been used to train AI). This led to a lot of people (both authors and non-authors) screaming about the evils of AI, and about how wrong it was that such books were included.

But, again, that’s the nature of releasing a work to the public. People read it. Machines might also read it. And they might use what they learn in that work to do something else. And you might like that and you might not, but it’s not really your call.

That’s why I was happy to see Ian Bogost publish an article explaining why he’s happy that his books were found in Books3, saying what those two other authors I spoke to wouldn’t say publicly. Ian is getting screamed at all over social media for this article, with most of it apparently based on the title and not on the substance. But it’s worth reading.

Whether or not Meta’s behavior amounts to infringement is a matter for the courts to decide. Permission is a different matter. One of the facts (and pleasures) of authorship is that one’s work will be used in unpredictable ways. The philosopher Jacques Derrida liked to talk about “dissemination,” which I take to mean that, like a plant releasing its seed, an author separates from their published work. Their readers (or viewers, or listeners) not only can but must make sense of that work in different contexts. A retiree cracks a Haruki Murakami novel recommended by a grandchild. A high-school kid skims Shakespeare for a class. My mother’s tree trimmer reads my book on play at her suggestion. A lack of permission underlies all of these uses, as it underlies influence in general: When successful, art exceeds its creator’s plans.

But internet culture recasts permission as a moral right. Many authors are online, and they can tell you if and when you’re wrong about their work. Also online are swarms of fans who will evangelize their received ideas of what a book, a movie, or an album really means and snuff out the “wrong” accounts. The Books3 imbroglio reflects the same impulse to believe that some interpretations of a work are out of bounds.

Perhaps Meta is an unappealing reader. Perhaps chopping prose into tokens is not how I would like to be read. But then, who am I to say what my work is good for, how it might benefit someone—even a near-trillion-dollar company? To bemoan this one unexpected use for my writing is to undermine all of the other unexpected uses for it. Speaking as a writer, that makes me feel bad.

More importantly, Bogost notes that the entire point of Books3 originally was to make sure that AI wasn’t just controlled by corporate juggernauts:

The Books3 database was itself uploaded in resistance to the corporate juggernauts. The person who first posted the repository has described it as the only way for open-source, grassroots AI projects to compete with huge commercial enterprises. He was trying to return some control of the future to ordinary people, including book authors. In the meantime, Meta contends that the next generation of its AI model—which may or may not still include Books3 in its training data—is “free for research and commercial use,” a statement that demands scrutiny but also complicates this saga. So does the fact that hours after The Atlantic published a search tool for Books3, one writer distributed a link that allows you to access the feature without subscribing to this magazine. In other words: a free way for people to be outraged about people getting writers’ work for free.

I’m not sure what I make of all this, as a citizen of the future no less than as a book author. Theft is an original sin of the internet. Sometimes we call it piracy (when software is uploaded to USENET, or books to Books3); other times it’s seen as innovation (when Google processed and indexed the entire internet without permission) or even liberation. AI merely iterates this ambiguity. I’m having trouble drawing any novel or definitive conclusions about the Books3 story based on the day-old knowledge that some of my writing, along with trillions more chunks of words from, perhaps, Amazon reviews and Reddit grouses, have made their way into an AI training set.

I get that it feels bad that your works are being used in ways you disapprove of, but that is the nature of releasing something into the world. And the underlying point of the Books3 database is to spread access to information to everyone. And that’s a good thing that should be supported, in the tradition of folks like Aaron Swartz.

It’s the same reason why, even as lots of news sites are proactively blocking AI scanning bots, I’m actually hoping that more of them will scan and use Techdirt’s words to do more and to be better. The more information shared, the more we can do with it, and that’s a good thing.

I understand the underlying concerns, but that’s just part of what happens when you release a work to the world. Part of releasing something into the world is coming to terms with the fact that you no longer control how people will read it or be inspired by it, or what lessons they will take from it.

 

Source: Publishing A Book Means No Longer Having Control Over How Others Feel About It, Or How They’re Inspired By It. And That Includes AI. | Techdirt

JuMBOs: planet-like objects – but without stars to orbit, so not planets according to the definition

A team of astronomers have detected over 500 planet-like objects in the inner Orion Nebula and the Trapezium Cluster that they believe could shake up the very definition of a planet.

The 4-light-year-wide Trapezium Cluster sits at the heart of the Orion Nebula, or Messier 42, about 1,400 light-years from Earth. The cluster is filled with young stars, which make their surrounding gas and dust glow with infrared light.

The Webb Space Telescope’s Near-Infrared Camera (NIRCam) observed the nebula at short and long wavelengths for nearly 35 hours between September 26, 2022, and October 2, 2022, giving researchers a remarkably sharp look at relatively small (meaning Jupiter-sized and smaller) isolated objects in the nebula. These NIRCam images are some of the largest mosaics from the telescope to date, according to a European Space Agency release. Though they cannot be hosted in all their resolved glory on this site, you can check them out on the ESASky application.

A planet, per NASA, is an object that orbits a star and is large enough to have taken on a spherical shape and to have cleared away other objects near its size from its orbit. According to the team behind the new observations, the Jupiter-mass binary objects (or JuMBOs) are big enough to be planetary but don’t have a star they’re clearly orbiting. Using Webb, the researchers also observed low-temperature planetary-mass objects (or PMOs). The team’s results have yet to be peer-reviewed but are currently hosted on the preprint server arXiv.
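To make the definition concrete, here is a minimal sketch (our illustration, not code from NASA or the research team) of the three criteria as a checklist; a JuMBO fails on the very first test:

```python
from dataclasses import dataclass

@dataclass
class Body:
    name: str
    orbits_a_star: bool  # clearly bound to a host star
    is_spherical: bool   # massive enough to pull itself into a sphere
    cleared_orbit: bool  # has cast away similar-sized objects from its path

def is_planet(body: Body) -> bool:
    # NASA's working definition, per the article: all three criteria must hold.
    return body.orbits_a_star and body.is_spherical and body.cleared_orbit

# A JuMBO is big enough to be planetary, but with no host star the
# orbit-related criteria never even come into play.
jumbo = Body("JuMBO candidate", orbits_a_star=False,
             is_spherical=True, cleared_orbit=False)
print(is_planet(jumbo))  # False
```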

[…]

In the preprint, the team describes 540 planetary-mass candidates, with the smallest masses clocking in at about 0.6 times the mass of Jupiter. According to The Guardian, analysis revealed steam and methane in the JuMBOs’ atmospheres. The researchers also found that 9% of those objects are in wide binaries, with separations of 100 times the distance between Earth and the Sun (100 astronomical units) or more. That finding is perplexing, because objects of JuMBOs’ masses typically orbit a star. In other words, the JuMBOs look decidedly planet-like but lack a key characteristic of planets.
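For a sense of why Webb can pick out such pairs at all, here is a quick back-of-the-envelope calculation (ours, using only the 1,400-light-year distance and the 100-astronomical-unit separation quoted above): a 100 AU binary at that distance subtends roughly a quarter of an arcsecond on the sky, wide enough for NIRCam to resolve:

```python
import math

AU_PER_LIGHT_YEAR = 63_241               # astronomical units per light-year
distance_au = 1_400 * AU_PER_LIGHT_YEAR  # Orion Nebula distance, converted to AU
separation_au = 100                      # minimum wide-binary separation

# Small-angle approximation: angle (radians) = separation / distance.
angle_rad = separation_au / distance_au
angle_arcsec = math.degrees(angle_rad) * 3600

print(f"{angle_arcsec:.2f} arcsec")  # ~0.23", comfortably above NIRCam's limit
```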

[…]

So what are the JuMBOs? It’s still not clear whether the objects form like planets—by accreting the gas and dust from a protoplanetary disk following a star’s formation—or more like the stars themselves. The Trapezium Cluster’s stars are quite young; according to the STScI release, if our solar system were a middle-aged person, the cluster’s stars would be just three or four days old. It’s possible that objects like the JuMBOs are actually common in the universe, but Webb is the first observatory that has the ability to pick out the individual objects.

[…]

Source: Quasi-Planets Called JuMBOs Are Bopping Around in Space

Arm patches Mali GPU driver bug exploited by spyware

Commercial spyware has exploited a security hole in Arm’s Mali GPU drivers to compromise some people’s devices, according to Google today.

These graphics processors are used in a ton of gear, from phones and tablets to laptops and cars, so the kernel-level vulnerability may be present in countless devices. This includes Android handsets made by Google, Samsung, and others.

The vulnerable drivers are paired with Arm’s Midgard (launched in 2010), Bifrost (2016), Valhall (2019), and fifth generation Mali GPUs (2023), so we imagine this buggy code will be in millions of systems.

On Monday, Arm issued an advisory for the flaw, which is tracked as CVE-2023-4211. This is a use-after-free bug affecting Midgard driver versions r12p0 to r32p0; Bifrost versions r0p0 to r42p0; Valhall versions r19p0 to r42p0; and Arm 5th Gen GPU Architecture versions r41p0 to r42p0.
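For anyone checking a device against the advisory, the affected ranges map onto version strings of the form rXXpY. Here is a small sketch of that comparison (ours, not an official Arm tool):

```python
import re

# Affected Mali driver ranges from Arm's advisory for CVE-2023-4211 (inclusive).
AFFECTED = {
    "midgard": ((12, 0), (32, 0)),  # r12p0 - r32p0
    "bifrost": ((0, 0),  (42, 0)),  # r0p0  - r42p0
    "valhall": ((19, 0), (42, 0)),  # r19p0 - r42p0
    "5th-gen": ((41, 0), (42, 0)),  # r41p0 - r42p0
}

def parse_version(v: str) -> tuple[int, int]:
    """Parse an 'rXXpY' Mali driver version string into (release, patch)."""
    m = re.fullmatch(r"r(\d+)p(\d+)", v)
    if not m:
        raise ValueError(f"unrecognized version string: {v}")
    return int(m.group(1)), int(m.group(2))

def is_affected(family: str, version: str) -> bool:
    low, high = AFFECTED[family]
    return low <= parse_version(version) <= high

print(is_affected("valhall", "r38p1"))  # True: inside r19p0-r42p0
print(is_affected("valhall", "r43p0"))  # False: r43p0 carries the fix
```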

We’re told Arm has corrected the security blunder in its drivers for Bifrost to fifth-gen. “This issue is fixed in Bifrost, Valhall, and Arm 5th Gen GPU Architecture Kernel Driver r43p0,” the advisory stated. “Users are recommended to upgrade if they are impacted by this issue. Please contact Arm support for Midgard GPUs.”

We note that version r43p0 of Arm’s open source Mali drivers for Bifrost through fifth-gen was released in March. Midgard, it appears, has yet to get that version publicly, hence the need to contact Arm for those GPUs. We’ve asked Arm for more details.

What this means for the vast majority of people is: look out for operating system or manufacturer updates that include Mali GPU driver fixes, and install them to close this security hole, or look up the open source drivers and apply updates yourself if you’re into that. Your equipment may already be patched by now, given that the fix was released in late March and details of the bug are only just coming out. If you’re a device maker, you should be rolling out patches to customers.

“A local non-privileged user can make improper GPU memory processing operations to gain access to already freed memory,” is how Arm described the bug. That, it seems, is enough to allow spyware to take hold of a targeted vulnerable device.

According to Arm there is “evidence that this vulnerability may be under limited, targeted exploitation.” We’ve received confirmation from Google, whose Threat Analysis Group (TAG) researcher Maddie Stone and Project Zero researcher Jann Horn found and reported the vulnerability to the chip designer, that this targeted exploitation has indeed taken place.

“At this time, TAG can confirm the CVE was used in the wild by a commercial surveillance vendor,” a TAG spokesperson told The Register. “More technical details will be available at a later date, aligning with our vulnerability disclosure policy.”

[…]

 

Source: Arm patches Mali GPU driver bug exploited by spyware • The Register

Amazon Used Secret ‘Project Nessie’ Algorithm To Raise Prices

Amazon used an algorithm code-named “Project Nessie” to test how much it could raise prices in a way that competitors would follow, according to redacted portions of the Federal Trade Commission’s monopoly lawsuit against the company. From a report: The algorithm helped Amazon improve its profit on items across shopping categories, and because of the power the company has in e-commerce, led competitors to raise their prices and charge customers more, according to people familiar with the allegations in the complaint. In instances where competitors didn’t raise their prices to Amazon’s level, the algorithm — which is no longer in use — automatically returned the item to its normal price point.

The company also used Nessie to address what employees saw as a promotional spiral, where Amazon would match a discounted price from a competitor, such as Target.com, and other competitors would follow, lowering their prices. When Target ended its sale, Amazon and the other competitors would remain locked at the low price because they were still matching each other, according to former employees who worked on the algorithm and pricing team. The algorithm helped Amazon recoup money and improve margins. The FTC’s lawsuit redacted an estimate of how much it alleges the practice “extracted from American households,” and it also says the algorithm helped the company generate a redacted amount of “excess profit.” Amazon made more than $1 billion in revenue through use of the algorithm, according to a person familiar with the matter. Amazon stopped using the algorithm in 2019, some of the people said. It wasn’t clear why the company stopped using it.
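Based purely on the behavior described in the report, the core of such a pricing algorithm would look something like the sketch below (entirely our reconstruction; the names and structure are hypothetical, not Amazon’s actual code): probe a higher price, keep it if competitors follow, and otherwise snap back to the normal price point.

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    normal_price: float   # the item's ordinary price point
    current_price: float

def probe_price_increase(item: Item, test_price: float,
                         competitor_price: float) -> float:
    """Hypothetical reconstruction of the reported Nessie behavior:
    raise the price, keep it only if competitors followed, and otherwise
    automatically return the item to its normal price point."""
    item.current_price = test_price
    if competitor_price >= test_price:
        # Competitors followed the increase, so the higher price sticks.
        return item.current_price
    # Competitors held firm: revert to the normal price.
    item.current_price = item.normal_price
    return item.current_price

widget = Item("widget", normal_price=10.00, current_price=10.00)
print(probe_price_increase(widget, test_price=11.00, competitor_price=11.00))  # 11.0
print(probe_price_increase(widget, test_price=11.00, competitor_price=10.00))  # 10.0
```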

Source: Amazon Used Secret ‘Project Nessie’ Algorithm To Raise Prices – Slashdot