Lithium Ion Batteries a Growing Source of PFAS Pollution, Study Finds

“Nature recently published an open-access article (not paywalled) that studies the lifecycle of lithium-ion batteries once they are manufactured,” writes Slashdot reader NoWayNoShapeNoForm. “The study is a ‘cradle-to-grave’ look at these batteries and certain chemicals that they contain. The university researchers that authored the study found that the electrolytes and polymers inside lithium-ion batteries contain a class of PFAS known as bis-FASI chemicals. PFAS chemicals are internationally recognized pollutants, yet they are found in consumer and industrial processes, such as non-stick coatings, surfactants, and film-forming foams. PFAS chemicals have been found in windmill coatings, semiconductors, solar collectors, and photovoltaic cells.”

Phys.org reports: Texas Tech University’s Jennifer Guelfo was part of a research team that found the use of a novel sub-class of per- and polyfluoroalkyl substances (PFAS) in lithium-ion batteries is a growing source of pollution in air and water. Testing by the research team further found these PFAS, called bis-perfluoroalkyl sulfonimides (bis-FASIs), demonstrate environmental persistence and ecotoxicity comparable to older notorious compounds like perfluorooctanoic acid (PFOA). The researchers sampled air, water, snow, soil and sediment near manufacturing plants in Minnesota, Kentucky, Belgium and France. The bis-FASI concentrations in these samples were commonly at very high levels. Data also suggested air emissions of bis-FASIs may facilitate long-range transport, meaning areas far from manufacturing sites may be affected as well. Analysis of several municipal landfills in the southeastern U.S. indicated these compounds can also enter the environment through disposal of products, including lithium-ion batteries.

Toxicity testing demonstrated concentrations of bis-FASIs similar to those found at the sampling sites can change behavior and fundamental energy metabolic processes of aquatic organisms. Bis-FASI toxicity has not yet been studied in humans, though other, better-studied PFAS are linked to cancer, infertility and other serious health harms. Treatability testing showed bis-FASIs did not break down during oxidation, which has also been observed for other PFAS. However, data showed concentrations of bis-FASIs in water could be reduced using granular activated carbon and ion exchange, methods already used to remove PFAS from drinking water.
“Our results reveal a dilemma associated with manufacturing, disposal, and recycling of clean energy infrastructure,” said Guelfo, an associate professor of environmental engineering in the Edward E. Whitacre Jr. College of Engineering. “Slashing carbon dioxide emissions with innovations like electric cars is critical, but it shouldn’t come with the side effect of increasing PFAS pollution. We need to facilitate technologies, manufacturing controls and recycling solutions that can fight the climate crisis without releasing highly recalcitrant pollutants.”

Source: Lithium Ion Batteries a Growing Source of PFAS Pollution, Study Finds

Inputs, Outputs, and Fair Uses: Unpacking Responses to Journalists’ Copyright Lawsuits

The complaints against OpenAI and Microsoft in New York Times Company v. Microsoft Corporation and Daily News, LP v. Microsoft Corporation include multiple theories, for instance vicarious copyright infringement, contributory copyright infringement, and improper removal of copyright information. Those theories, however, are ancillary to both complaints’ primary cause of action: direct copyright infringement. While the defendants’ motions to dismiss focus primarily on jettisoning the ancillary claims and acknowledge that “development of record evidence” is necessary for resolving the direct infringement claims, they nonetheless offer insight into how the direct infringement fight might unfold.

Direct Infringement Via Inputs and Outputs: The Daily News plaintiffs claim that by “building training datasets containing” their copyrighted works without permission, the defendants directly infringe the plaintiffs’ copyrights. Inputting copyrighted material to train Gen AI tools, they aver, constitutes direct infringement. Regarding outputs, the Daily News plaintiffs assert that “by disseminating generative output containing copies and derivatives of the” plaintiffs’ content, the defendants’ tools also infringe the plaintiffs’ copyrights. The Daily News’s input (illicit training) and output (disseminating copies) allegations track earlier contentions of The New York Times Company.

Fair Use Inputs and “Fringe” Outputs: OpenAI’s June arguments in Daily News frame “the core issue” facing New York City-based federal judge Sidney Stein as “whether using copyrighted content to train a generative AI model is fair use under copyright law,” an issue OpenAI says “is for a later stage of the litigation” because discovery must first generate a factual record. Fair use, a defense to copyright infringement, involves analyzing four statutory factors: 1) the purpose and character of the allegedly infringing use; 2) the nature of the copyrighted work allegedly infringed upon; 3) the amount of the copyrighted work infringed upon and whether that amount, even if small, nonetheless goes to the heart of the work; and 4) whether the infringing use will harm the market value of (or serve as a market substitute for) the original copyrighted work.

So, how might ingesting copyrighted journalistic content––the training or input aspect of the alleged infringement––be a protected fair use? Microsoft argues in Daily News that its “and OpenAI’s tools [don’t] exploit the protected expression in the Plaintiffs’ digital content.” (emphasis added). That’s a key point because copyright law does not protect things like facts, “titles, names, short phrases, and slogans.” OpenAI asserts, in response to The New York Times Company’s lawsuit, that “no one . . . gets to monopolize facts or the rules of language.” Learning semantic rules and patterns of “language, grammar, and syntax”––predicting which words are statistically most likely to follow others––is, at bottom, the purpose of the fair use to which OpenAI and Microsoft say they’re putting newspaper articles. They’re ostensibly just leveraging copyrighted articles “internally” (emphasis in original) to identify and learn language patterns, not to reproduce the articles in which those words appear.

More fundamentally, OpenAI and Microsoft aren’t attempting to disseminate copies of what copyright law is intended to incentivize and protect––“original works of authorship” and “writings.” They aren’t, the defendants claim, trying to unfairly produce market substitutes for actual newspaper articles.

How, then, do they counter the newspapers’ output infringement allegations that the defendants’ tools sometimes produce verbatim versions of the newspapers’ copyrighted articles? OpenAI contends such regurgitative outcomes “depend on an elaborate effort [by the defendants] to coax such outputs from OpenAI’s products, in a way that violates the operative OpenAI terms of service and that no normal user would ever even attempt.” Regurgitations otherwise are “rare” and “unintended,” the company adds. Barring settlements, courts will examine the input and output infringement battles in the coming months and years.

Source: Inputs, Outputs, and Fair Uses: Unpacking Responses to Journalists’ Copyright Lawsuits | American Enterprise Institute – AEI

Sharing material used to be the norm for newspapers, and should be for LLMs

Even though parents insist that it is good and right to share things, the copyright world has succeeded in establishing the contrary as the norm. Now, sharing is deemed a bad, possibly illegal thing. But it was not always thus, as a fascinating speech by Ryan Cordell, Associate Professor in the School of Information Sciences and Department of English at the University of Illinois Urbana-Champaign, underlines. In the US in the nineteenth century, newspaper material was explicitly not protected by copyright, and was routinely exchanged between titles:

Nineteenth-century editors’ attitude toward text reuse is exemplified in a selection that circulated in the last decade of the century, though often abbreviated from the version I cite here, which insists that “an editor’s selections from his contemporaries” are “quite often the best test of his editorial ability, and that the function of his scissors are not merely to fill up vacant spaces, but to reproduce the brightest and best thoughts…from all sources at the editor’s command.” While noting that sloppy or lazy selection will produce “a stupid issue,” this piece claims that just as often “the editor opens his exchanges, and finds a feast for eyes, heart and soul…that his space is inadequate to contain.” This piece ends by insisting “a newspaper’s real value is not the amount of original matter it contains, but the average quality of all the matter appearing in its columns whether original or selected.”

Material was not only copied verbatim, but modified and built upon in the process. As a result of this constant exchange, alteration and enhancement, newspaper readers in the US enjoyed a rich ecosystem of information, and a large number of titles flourished, since the cost of producing suitable material for each of them was shared and thus reduced.

That historical fact in itself is interesting. It’s also important at a time when newspaper publishers are some of the most aggressive in demanding ever stronger – and ever more disproportionate – copyright protection for their products, for example through “link taxes”. But Cordell’s speech is not simply backward looking. It goes on to make another fascinating observation, this time about large language models (LLMs):

We can see in the nineteenth-century newspaper exchanges a massive system for recycling and remediating culture. I do not wish to slip into hyperbole or anachronism, and will not claim historical newspapers as a precise analogue for twenty-first century AI or large language models. But it is striking how often metaphors drawn from earlier media appear in our attempts to understand and explain these new technologies.

The whole speech is well worth reading as a useful reminder that the current copyright panic over LLMs is in part because we have forgotten that sharing material and helping others to build on it was once the norm. And despite blinkered and selfish views to the contrary, it is still the right thing to do, just as parents continue to tell their children.

Source: Sharing material used to be the norm for newspapers, and should be for LLMs – Walled Culture

Hacking Airline WiFi The Hard Way

[…]

[Robert Heaton] had an interesting idea. Could the limited free use of the network be coopted to access the general internet? Turns out, the answer is yes.

Admittedly, it is a terrible connection. Here’s how it works. The airline lets you get to your frequent flier account. Once there, you can change information such as your name. A machine on the ground can see that change and make changes of its own. That’s all it takes.

It works like a drop box. You take TCP traffic, encode it as fake information for the account and enter it. You then watch for the response via the same channel and reconstitute the TCP traffic from the remote side. Now the network is at your fingertips.
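To make the trick concrete, here is a minimal Python sketch of the drop-box idea. This is not [Robert]'s code: the profile fields and the in-memory "account" record are stand-ins (much as he used a GitHub gist in his proof of concept), and a real tunnel would need chunking, sequencing, and retries layered on top.

```python
# Illustrative sketch of the "drop box" tunnel, not the original code.
# A shared, editable record (here just a dict standing in for a frequent-flier
# profile or a GitHub gist) carries data both ways: the client on the plane
# writes encoded request bytes into one field, a relay on the open internet
# writes the encoded response into another.
import base64

shared_account = {"first_name": "", "last_name": ""}  # the "drop box"

def client_send(payload: bytes) -> None:
    # Disguise outbound traffic as an innocuous profile edit.
    shared_account["first_name"] = base64.b64encode(payload).decode()

def relay_poll_and_respond() -> None:
    # The ground-side relay notices the edit, performs the real request,
    # and writes the answer back into another editable field.
    request = base64.b64decode(shared_account["first_name"])
    response = b"HTTP/1.1 200 OK\r\n\r\nhello from the ground"  # pretend fetch of `request`
    shared_account["last_name"] = base64.b64encode(response).decode()

def client_receive() -> bytes:
    return base64.b64decode(shared_account["last_name"])

client_send(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
relay_poll_and_respond()
print(client_receive())
```

Every round trip rides on nothing more exotic than an editable profile field being polled from both sides, which is exactly why the resulting connection is so slow.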

There’s more to it, but you can read about it in the post. It is slow, unreliable, and you definitely shouldn’t be doing it. But from the point of view of a clever hack, we loved it. In fact, [Robert] didn’t do it either. He proved it would work but did all the development using GitHub gist as the drop box. While we appreciate the hack, we also appreciate the ethical behavior!

Some airlines allow free messaging, which is another way to tunnel traffic. If you can connect to something, you can probably find a way to use it as a tunnel.

Source: Hacking Airline WiFi The Hard Way | Hackaday

Report finds most subscription services manipulate customers with ‘dark patterns’

Most subscription sites use “dark patterns” to influence customer behavior around subscriptions and personal data, according to a pair of new reports from global consumer protection groups. Dark patterns are “practices commonly found in online user interfaces [that] steer, deceive, coerce or manipulate consumers into making choices that often are not in their best interests.” The international research efforts were conducted by the International Consumer Protection and Enforcement Network (ICPEN) and the Global Privacy Enforcement Network (GPEN).

The ICPEN conducted the review of 642 websites and mobile apps with a subscription component. The assessment revealed at least one dark pattern in use at almost 76 percent of the platforms, and multiple dark patterns at play in almost 68 percent of them. One of the most common dark patterns discovered was sneaking, where a company makes potentially negative information difficult to find. ICPEN said 81 percent of the platforms with automatic subscription renewal kept the option to turn off auto-renewal out of the purchase flow. Other dark patterns for subscription services included interface interference, where the actions a company wants users to take are made easier to perform than the alternatives, and forced action, where customers have to provide information to access a particular function.

The companion report from GPEN examined dark patterns that could encourage users to compromise their privacy. In this review, nearly all of the more than 1,000 websites and apps surveyed used a deceptive design practice. More than 89 percent of them used complex and confusing language in their privacy policies. Interface interference was another key offender here, with 57 percent of the platforms making the least protective privacy option the easiest to choose and 42 percent using emotionally charged language that could influence users.

Even the most savvy of us can be influenced by these subtle cues to make suboptimal decisions. Those decisions might be innocuous ones, like forgetting that you’ve set a service to auto-renew, or they might put you at risk by encouraging you to reveal more personal information than needed. The reports didn’t specify whether the dark patterns were used in illicit or illegal ways, only that they were present. The dual release is a stark reminder that digital literacy is an essential skill.

Source: Report finds most subscription services manipulate customers with ‘dark patterns’

The US Supreme Court’s Contempt for Facts Is a Betrayal of Justice

When the Supreme Court’s Ohio v. EPA decision blocked Environmental Protection Agency limits on Midwestern states polluting their downwind neighbors, a sad but telling coda came in Justice Neil Gorsuch’s opinion. In five instances, it confused nitrogen oxide, a pollutant that contributes to ozone formation, with nitrous oxide, better known as laughing gas.

You can’t make this stuff up. This repeated mistake in the 5-4 decision exemplifies a high court not just indifferent to facts but contemptuous of them.

Public trust in the Supreme Court, already at a historic low, is now understandably plunging. In the last four years, a reliably Republican majority on the high court, led by Chief Justice John Roberts, has embarked on a remarkable spree against history and reality itself, ignoring or eliding facts in decisions involving school prayer, public health, homophobia, race, climate change, abortion and clean water, not to mention the laughing gas case.

The crescendo to this assault on expertise landed in June, when the majority’s Chevron decision arrogated to the courts regulatory calls that have been made by civil servant scientists, physicians and lawyers for the last 40 years. (With stunning understatement, the Associated Press called it “a far-reaching and potentially lucrative victory to business interests.” No kidding.) The decision enthrones the high court—an unelected majority—as a group of technically incompetent, in some cases corrupt, politicos in robes with power over matters that hinge on vital facts about pollution, medicine, employment and much else. These matters govern our lives.

The 2022 Kennedy v. Bremerton School District school prayer decision hinged on a fable of a football coach offering “a quiet personal prayer,” in the words of the opinion. In reality, this coach was holding overt post-game prayer meetings on the 50-yard line, ones that an atheist player felt compelled to attend to keep off the bench. Last year’s 303 Creative v. Elenis decision, allowing a Web designer to discriminate against gay people, revolved entirely around a supposed request for a gay wedding website that never existed, one that (supposedly) came from a straight man who never made any such request. Again, you can’t make this stuff up. Unless you are on the Supreme Court. Then it becomes law.

Summing up the Court’s term on July 1, the legal writer Chris Geidner called attention to a more profound “important and disturbing reality” of the current majority’s relationship to facts. “When it needs to decide a matter for the right, it can and does accept questionable, if not false, claims as facts. If the result would benefit the left, however, there are virtually never enough facts to reach a decision.”

The “laughing gas” decision illustrates this nicely: EPA had asked 23 states to submit a state-based plan to reduce their downwind pollution. Of those, 21 proposed to do nothing to limit their nitrogen (not nitrous) oxide emissions. The other two didn’t respond at all. Instead of telling the states to cut their pollution as required by law, the Court’s majority invented a new theoretical responsibility for EPA (to account for future court cases keeping a state out of its Clean Air Act purview) and sent the case back to an appeals court.

Source: The Supreme Court’s Contempt for Facts Is a Betrayal of Justice | Scientific American

And that’s not even talking about giving sitting presidents immunity for criminal behaviour!

Scientific articles using ‘sneaked references’ to inflate their citation numbers

[…] A recent Journal of the Association for Information Science and Technology article by our team of academic sleuths – which includes information scientists, a computer scientist and a mathematician – has revealed an insidious method to artificially inflate citation counts through metadata manipulations: sneaked references.

Hidden manipulation

People are becoming more aware of scientific publications and how they work, including their potential flaws. Just last year more than 10,000 scientific articles were retracted. The issues around citation gaming and the harm it causes the scientific community, including damaging its credibility, are well documented.

[…]

We found through a chance encounter that some unscrupulous actors have added extra references, invisible in the text but present in the articles’ metadata, when they submitted the articles to scientific databases. The result? Citation counts for certain researchers or journals have skyrocketed, even though these references were not cited by the authors in their articles.

Chance discovery

The investigation began when Guillaume Cabanac, a professor at the University of Toulouse, wrote a post on PubPeer, a website dedicated to postpublication peer review, in which scientists discuss and analyze publications. In the post, he detailed how he had noticed an inconsistency: a Hindawi journal article that he suspected was fraudulent because it contained awkward phrases had far more citations than downloads, which is very unusual.

The post caught the attention of several sleuths who are now the authors of the JASIST article. We used a scientific search engine to look for articles citing the initial article. Google Scholar found none, but Crossref and Dimensions did find references. The difference? Google Scholar is likely to mostly rely on the article’s main text to extract the references appearing in the bibliography section, whereas Crossref and Dimensions use metadata provided by publishers.
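As a rough illustration of that kind of check (not the authors' actual tooling), the sketch below asks Crossref's public REST API for the references a publisher has deposited for a given DOI and compares the count with the bibliography visible in the article itself; the DOI and the visible-reference count are placeholders.

```python
# Compare publisher-deposited references (Crossref metadata) with the
# references actually visible in the article's bibliography.
import requests

def crossref_references(doi: str) -> list[dict]:
    """Return the reference list deposited for this DOI, if any."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    return resp.json()["message"].get("reference", [])

doi = "10.1234/example.doi"   # placeholder DOI
visible_in_article = 25       # number of references counted in the article text

registered = crossref_references(doi)
print(f"Crossref metadata lists {len(registered)} references; "
      f"the article text shows {visible_in_article}.")
if len(registered) > visible_in_article:
    print("Possible sneaked references: the metadata contains entries "
          "that never appear in the bibliography.")
```

A mismatch in the other direction, fewer metadata entries than visible references, points to the "lost" legitimate references mentioned below.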

[…]

In the journals published by Technoscience Academy, at least 9% of recorded references were “sneaked references.” These additional references were only in the metadata, distorting citation counts and giving certain authors an unfair advantage. Some legitimate references were also lost, meaning they were not present in the metadata.

In addition, when analyzing the sneaked references, we found that they disproportionately benefited certain researchers. For example, a single researcher who was associated with Technoscience Academy benefited from more than 3,000 additional illegitimate citations. Some journals from the same publisher benefited from a few hundred additional sneaked citations.

[…]

Why is this discovery important? Citation counts heavily influence research funding, academic promotions and institutional rankings. Manipulating citations can lead to unjust decisions based on false data. More worryingly, this discovery raises questions about the integrity of scientific impact measurement systems, a concern that has been highlighted by researchers for years. These systems can be manipulated to foster unhealthy competition among researchers, tempting them to take shortcuts to publish faster or achieve more citations.

[…]

Source: When scientific citations go rogue: Uncovering ‘sneaked references’

Speed limiters arrive for all new cars in the European Union

It was a big week for road safety campaigners in the European Union as Intelligent Speed Assistance (ISA) technology became mandatory on all new cars.

The rules came into effect on July 7 and follow a 2019 decision by the European Commission to make ISA obligatory on all new models and types of vehicles introduced from July 2022. Two years on, and the tech must be in all new cars.

European legislators reckon that the rules will make for safer roads. However, they will also add to the ever-increasing amount of technology rolling around the continent’s highways. While EU law has no legal force in the UK, it’s hard to imagine many manufacturers making an exemption for Britain.

So how does it work? In the first instance, the speed limit on a given road can be determined using data from a Global Navigation Satellite System (GNSS) – such as the Global Positioning System (GPS) – and a digital map. This might be combined with recognition of physical road signs.

If the driver is being a little too keen, the ISA system must notify them that the limit has been exceeded but, according to the European Road Safety Charter, “not to restrict his/her possibility to act in any moment during driving.”

“The driver is always in control and can easily override the ISA system.”

There are four options available to manufacturers according to the regulations. The first two, a cascaded acoustic or vibrating warning, don’t intervene, while the latter two, haptic feedback through the accelerator pedal and a speed limiter, will. The European Commission noted, “Even in the case of speed control function, where the car speed will be automatically gently reduced, the system can be smoothly overridden by the driver by pressing the accelerator pedal a little bit deeper.”
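As a toy illustration of that flow, here is a short Python sketch: resolve a limit from map and sign data, then react according to whichever of the four modes the manufacturer fitted. The mode names, the preference for sign data over map data, and the override handling are illustrative assumptions, not requirements lifted from the regulation.

```python
# Toy sketch of the ISA flow described above; names and rules are illustrative.
from enum import Enum

class IsaMode(Enum):
    ACOUSTIC_WARNING = 1   # cascaded audible warning (no intervention)
    VIBRATING_WARNING = 2  # cascaded vibrating warning (no intervention)
    PEDAL_FEEDBACK = 3     # haptic counter-pressure through the accelerator pedal
    SPEED_CONTROL = 4      # gently reduces speed, overridable by the driver

def resolve_limit(map_limit_kph: int | None, sign_limit_kph: int | None) -> int | None:
    # Assume a freshly recognised road sign trumps possibly stale map data.
    return sign_limit_kph if sign_limit_kph is not None else map_limit_kph

def isa_step(speed_kph: float, limit_kph: int | None, mode: IsaMode,
             driver_overriding: bool) -> str:
    if limit_kph is None or speed_kph <= limit_kph:
        return "no action"
    if mode in (IsaMode.ACOUSTIC_WARNING, IsaMode.VIBRATING_WARNING):
        return "warn driver"                        # informational only
    if driver_overriding:
        return "driver override: no intervention"   # accelerator pressed deeper
    if mode is IsaMode.PEDAL_FEEDBACK:
        return "push back through the accelerator pedal"
    return "gently reduce speed"

# 58 km/h where the map says 50 and no sign has been read recently:
print(isa_step(58, resolve_limit(50, None), IsaMode.SPEED_CONTROL, driver_overriding=False))
```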

The RAC road safety spokesperson Rod Dennis said: “While it’s not currently mandated that cars sold in the UK have to be fitted with Intelligent Speed Assistance (ISA) systems, we’d be surprised if manufacturers deliberately excluded the feature from those they sell in the UK as it would add unnecessary cost to production.”

This writer has driven a car equipped with the technology, and while it would be unfair to name and shame particular manufacturers, things are a little hit-and-miss. Road signs are not always interpreted correctly, and maps are not always up to date, meaning the car is occasionally convinced that the speed limit differs from reality, with various beeps and vibrations to demonstrate its belief.

Dennis cautioned, “Anyone getting a new vehicle would be well advised to familiarise themselves with ISA and how it works,” and we would have to agree.

While it is important to understand that the technology is still a driver aid and can easily be overridden, it is not hard to detect the direction of travel.

Source: Speed limiters arrive for all new cars in the European Union • The Register