Google turned ANC earbuds into a heart rate sensor

Google today detailed its research into audioplethysmography (APG) that adds heart rate sensing capabilities to active noise canceling (ANC) headphones and earbuds “with a simple software upgrade.”

Google says the “ear canal [is] an ideal location for health sensing” given that the deep ear artery “forms an intricate network of smaller vessels that extensively permeate the auditory canal.”

This audioplethysmography approach works by “sending a low intensity ultrasound probing signal through an ANC headphone’s speakers.”

This signal triggers echoes, which are received via on-board feedback microphones. Google observes that “the tiny ear canal skin displacement and heartbeat vibrations modulate these ultrasound echoes.”

A model Google created processes that feedback into a heart rate reading, as well as a heart rate variability (HRV) measurement. The technique works even with music playing and “bad earbuds seals.” It was, however, affected by body motion, which Google countered with a multi-tone approach that serves as a calibration tool to “find the best frequency that measures heart rate, and use only the best frequency to get high-quality pulse waveform.”
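
To make that signal chain concrete, here is a minimal sketch of how an APG-style pipeline could recover a pulse rate from the echo picked up by the feedback mic. The probe frequency, sample rate, and filter bands are illustrative assumptions, not Google's published parameters, and the multi-tone calibration would simply run this per candidate tone and keep the cleanest result.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert, resample_poly

FS = 96_000        # mic sample rate (assumed)
PROBE_HZ = 30_000  # inaudible ultrasound probing tone (assumed)

def pulse_rate_bpm(echo: np.ndarray) -> float:
    # Isolate the probe tone, then take its envelope: the tiny ear canal
    # skin displacements from each heartbeat modulate the echo amplitude.
    b, a = butter(4, [PROBE_HZ - 500, PROBE_HZ + 500], "bandpass", fs=FS)
    envelope = np.abs(hilbert(filtfilt(b, a, echo)))
    # Decimate to 100 Hz and keep the physiological band
    # (0.7-3.5 Hz, roughly 42-210 beats per minute).
    env = resample_poly(envelope, up=1, down=FS // 100)
    b, a = butter(2, [0.7, 3.5], "bandpass", fs=100)
    pulse = filtfilt(b, a, env)
    # The strongest spectral peak in that band is the heart rate.
    freqs = np.fft.rfftfreq(len(pulse), d=1 / 100)
    power = np.abs(np.fft.rfft(pulse))
    band = (freqs >= 0.7) & (freqs <= 3.5)
    return 60.0 * freqs[band][np.argmax(power[band])]

The key idea is that the heartbeat rides on the echo as a slow amplitude modulation, so everything reduces to classic envelope detection plus a band-limited peak search.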

Google performed two sets of studies with 153 people that found APG “achieves consistently accurate heart rate (3.21% median error across participants in all activity scenarios) and heart rate variability (2.70% median error in inter-beat interval) measurements.”

Compared to existing HR sensors, it is not affected by skin tone. Ear canal size and “sub-optimal seal conditions” also do not affect accuracy. Google believes this is a better approach than putting traditional photoplethysmogram (PPG) and electrocardiogram (ECG) sensors, along with a microcontroller, in headphones/earbuds:

…this sensor mounting paradigm inevitably adds cost, weight, power consumption, acoustic design complexity, and form factor challenges to hearables, constituting a strong barrier to its wide adoption.

Google closes with:

APG transforms any TWS ANC headphones into smart sensing headphones with a simple software upgrade, and works robustly across various user activities. The sensing carrier signal is completely inaudible and not impacted by music playing. More importantly, APG represents new knowledge in biomedical and mobile research and unlocks new possibilities for low-cost health sensing.

“APG is the result of collaboration across Google Health, product, UX and legal teams,” so this coming to Pixel Buds is far from guaranteed at this point.

Source: Google turned ANC earbuds into heart rate sensor

AI Risks – doomsayers, warriors, reformers

There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in AI technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.

The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts, and the public declarations issued about AI are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns.

The result is a cacophony of coded language, contradictory views, and provocative policy demands that are undermining our ability to grapple with a technology destined to drive the future of politics, our economy, and even our daily lives.

These factions are in dialogue not only with the public but also with one another. Sometimes, they trade letters, opinion essays, or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view AI. But if lawmakers and the public fail to recognize the subtext of their arguments, they risk missing the real consequences of our possible regulatory and cultural paths forward.

To understand the fight and the impact it may have on our shared future, look past the immediate claims and actions of the players to the greater implications of their points of view. When you do, you’ll realize this isn’t really a debate only about AI. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.

Beneath this roiling discord is a true fight over the future of society. Should we focus on avoiding the dystopia of mass unemployment, a world where China is the dominant superpower, or a society where the worst prejudices of humanity are embodied in opaque algorithms that control our lives? Should we listen to wealthy futurists who discount the importance of climate change because they’re already thinking ahead to colonies on Mars? It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of AI to stay true to the humanity of our values.

One way to decode the motives behind the various declarations is through their language. Because language itself is part of their battleground, the different AI camps tend not to use the same words to describe their positions. One faction describes the dangers posed by AI through the framework of safety, another through ethics or integrity, yet another through security, and others through economics. By decoding who is speaking and how AI is being described, we can explore where these groups differ and what drives their views.

The Doomsayers

The loudest perspective is a frightening, dystopian vision in which AI poses an existential risk to humankind, capable of wiping out all life on Earth. AI, in this vision, emerges as a godlike, superintelligent, ungovernable entity capable of controlling everything. AI could destroy humanity or pose a risk on par with nukes. If we’re not careful, it could kill everyone or enslave humanity. It’s likened to monsters like the Lovecraftian shoggoths, artificial servants that rebelled against their creators, or paper clip maximizers that consume all of Earth’s resources in a single-minded pursuit of their programmed goal. It sounds like science fiction, but these people are serious, and they mean the words they use.

These are the AI safety people, and their ranks include the “Godfathers of AI,” Geoffrey Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic the capabilities of the human mind. Having steamrollered the public conversation by creating large language models like ChatGPT and other AI tools capable of increasingly impressive feats, they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.

This doomsaying is boosted by a class of tech elite that has enormous power to shape the conversation. And some in this group are animated by the radical effective altruism movement and the associated cause of longtermism, which tend to focus on the most extreme catastrophic risks and emphasize the far-future consequences of our actions. These philosophies are hot among the cryptocurrency crowd, like the disgraced former billionaire Sam Bankman-Fried, who at one time possessed sudden wealth in search of a cause.

Reasonable-sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic longtermist would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like AI enslavement.

Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future. In the name of longtermism, Elon Musk reportedly believes that our society needs to encourage reproduction among those with the greatest culture and intelligence (namely, his ultrarich buddies). And he wants to go further, such as limiting the right to vote to parents and even populating Mars. It’s widely believed that Jaan Tallinn, the wealthy longtermist who co-founded the most prominent centers for the study of AI safety, has made dismissive noises about climate change because he thinks that it pales in comparison with far-future unknown unknowns like risks from AI. The technology historian David C. Brock calls these fears “wishful worries”—that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”

More practically, many of the researchers in this group are proceeding full steam ahead in developing AI, demonstrating how unrealistic it is to simply hit pause on technological development. But the roboticist Rodney Brooks has pointed out that we will see the existential risks coming—the dangers will not be sudden and we will have time to change course. While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of AI and, most important, not allow them to strategically distract from more immediate concerns. Let’s not let apocalyptic prognostications overwhelm us and smother the momentum we need to develop critical guardrails.

The Reformers

While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower. Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.

The alternative to the end-of-the-world, existential risk narrative is a distressingly familiar vision of dystopia: a society in which humanity’s worst instincts are encoded into and enforced by machines. The doomsayers think AI enslavement looks like the Matrix; the reformers point to modern-day contractors doing traumatic work at low pay for OpenAI in Kenya.

Propagators of these AI ethics concerns—like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury, and Cathy O’Neil—have been raising the alarm on inequities coded into AI for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women, and people who identify as LGBTQ. They are often motivated by insight into what it feels like to be on the wrong end of algorithmic oppression and by a connection to the communities most vulnerable to the misuse of new technology. Many in this group take an explicitly social perspective: When Joy Buolamwini founded an organization to fight for equitable AI, she called it the Algorithmic Justice League. Ruha Benjamin called her organization the Ida B. Wells Just Data Lab.

Others frame efforts to reform AI in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside—or even above—their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the AI revolution have, at times, been eliminating safeguards. A signal moment came when Timnit Gebru, a co-leader of Google’s AI ethics team, was dismissed for pointing out the risks of developing ever-larger AI language models.

While doomsayers and reformers share the concern that AI must align with human interests, reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by AI misinformation, surveillance, and inequity. Integrity experts call for the development of responsible AI, for civic education to ensure AI literacy and for keeping humans front and center in AI systems.

This group’s concerns are well documented and urgent—and far older than modern AI technologies. Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that AI might kill us in the future should still demand that it not profile and exploit us in the present.

The Warriors

Other groups of prognosticators cast the rise of AI through the language of competitiveness and national security. One version has a post-9/11 ring to it—a world where terrorists, criminals, and psychopaths have unfettered access to technologies of mass destruction. Another version is a Cold War narrative of the United States losing an AI arms race with China and its surveillance-rich society.

Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.

OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant AI companies, are pushing for AI regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading AI companies while restricting competition from start-ups. In the lobbying battles over Europe’s trailblazing AI regulatory framework, US megacompanies pleaded to exempt their general-purpose AI from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”

Any technology critical to national defense usually has an easier time avoiding oversight, regulation, and limitations on profit. Any readiness gap in our military demands urgent budget increases and funds distributed to the military branches and their contractors, because we may soon be called upon to fight. Tech moguls like Google’s former chief executive Eric Schmidt, who has the ear of many lawmakers, signal to American policymakers about the Chinese threat even as they invest in US national security concerns.

The warriors’ narrative overlooks the fact that science and engineering are different from what they were during the mid-twentieth century. AI research is fundamentally international; no one country will win a monopoly. And while national security is important to consider, we must also be mindful of the self-interest of those positioned to benefit financially.


As the science-fiction author Ted Chiang has said, fears about the existential risks of AI are really fears about the threat of uncontrolled capitalism, and dystopias like the paper clip maximizer are just caricatures of every start-up’s business plan. Cosma Shalizi and Henry Farrell further argue that “we’ve lived among shoggoths for centuries, tending to them as though they were our masters” as monopolistic platforms devour and exploit the totality of humanity’s labor and ingenuity for their own interests. This dread applies as much to our future with AI as it does to our past and present with corporations.

Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with AI, China, and the fights picked among robber barons.

By analogy to the healthcare sector, we need an AI public option to truly keep AI companies in check. A publicly directed AI development project would serve to counterbalance for-profit corporate AI and help ensure an even playing field for access to the twenty-first century’s key technology while offering a platform for the ethical development and use of AI.

Also, we should embrace the humanity behind AI. We can hold founders and corporations accountable by mandating greater AI transparency in the development stage, in addition to applying legal standards for actions associated with AI. Remarkably, this is something that both the left and the right can agree on.

Ultimately, we need to make sure the network of laws and regulations that govern our collective behavior is knit more strongly, with fewer gaps and greater ability to hold the powerful accountable, particularly in those areas most sensitive to our democracy and environment. As those with power and privilege seem poised to harness AI to accumulate much more or pursue extreme ideologies, let’s think about how we can constrain their influence in the public square rather than cede our attention to their most bombastic nightmare visions for the future.

This essay was written with Nathan Sanders, and previously appeared in the New York Times.

Source: AI Risks – Schneier on Security

Air Canada Sues Website That Helps People Book More Flights, Simultaneously Calls Own Website Team Incompetent Beyond Belief

I am so frequently confused by companies that sue other companies for making their own sites and services more useful. It happens quite often. And quite often, the lawsuits are questionable CFAA claims against websites that scrape data to provide a better consumer experience, but one that still ultimately benefits the originating site.

Over the last few years various airlines have really been leading the way on this, with Southwest being particularly aggressive in suing companies that help people find Southwest flights to purchase. Unfortunately, many of these lawsuits are succeeding, to the point that a court has literally said that a travel company can’t tell others how much Southwest flights cost.

But the latest lawsuit of this nature doesn’t involve Southwest, and is quite possibly the dumbest one. Air Canada has sued the site Seats.aero that helps users figure out the best flights for their frequent flyer miles. Seats.aero is a small operation run by the company with the best name ever: Localhost, meaning that the lawsuit is technically “Air Canada v. Localhost” which sounds almost as dumb as this lawsuit is.

The Air Canada Group brings this action because Mr. Ian Carroll—through Defendant Localhost LLC—created a for-profit website and computer application (or “app”)— both called Seats.aero—that use substantial amounts of data unlawfully scraped from the Air Canada Group’s website and computer systems. In direct violation of the Air Canada Group’s web terms and conditions, Carroll uses automated digital robots (or “bots”) to continuously search for and harvest data from the Air Canada Group’s website and database. His intrusions are frequent and rapacious, causing multiple levels of harm, e.g., placing an immense strain on the Air Canada Group’s computer infrastructure, impairing the integrity and availability of the Air Canada Group’s data, soiling the customer experience with the Air Canada Group, interfering with the Air Canada Group’s business relations with its partners and customers, and diverting the Air Canada Group’s resources to repair the damage. Making matters worse, Carroll uses the Air Canada Group’s federally registered trademarks and logo to mislead people into believing that his site, app, and activities are connected with and/or approved by the real Air Canada Group and lending an air of legitimacy to his site and app. The Air Canada Group has tried to stop Carroll’s activities via a number of technological blocking measures. But each time, he employs subterfuge to fraudulently access and take the data—all the while boasting about his exploits and circumvention online.

Almost nothing in this makes any sense. Having third parties scrape sites for data about prices is… how the internet works. Whining about it is stupid beyond belief. And here, it’s doubly stupid, because anyone who finds a flight via seats.aero is then sent to Air Canada’s own website to book that flight. Air Canada is making money because Carroll’s company is helping people find Air Canada flights they can take.

Why are they mad?

Air Canada’s lawyers also seem technically incompetent. I mean, what the fuck is this?

Through screen scraping, Carroll extracts all of the data displayed on the website, including the text and images.

Carroll also employs the more intrusive API scraping to further feed Defendant’s website.

If the “API scraping” is “more intrusive” than screen scraping, you’re doing your APIs wrong. Is Air Canada saying that its tech team is so incompetent that its API puts more load on the site than scraping? Because, if so, Air Canada should fire its tech team. The whole point of an API is to make it easier for those accessing data from your website without needing to do the more cumbersome process of scraping.
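
To be clear about why that claim is backwards, here is a hedged sketch of the two techniques the lawsuit describes. The endpoints and fields are invented for illustration (this is not Air Canada's actual site or API), but the asymmetry is general: a screen scraper must download and parse an entire rendered page, while an API call returns just the structured data.

import requests
from bs4 import BeautifulSoup

def screen_scrape(origin: str, dest: str) -> list[str]:
    # Fetch the full rendered page (markup, scripts, boilerplate) and dig
    # the fares out of the HTML: heavier for both client and server.
    html = requests.get("https://example-airline.test/search",
                        params={"from": origin, "to": dest}).text
    page = BeautifulSoup(html, "html.parser")
    return [row.get_text() for row in page.select(".fare-row")]

def api_fetch(origin: str, dest: str) -> list[dict]:
    # One request, one compact JSON payload: exactly the data needed,
    # which is why a well-built API should mean *less* server load.
    resp = requests.get("https://example-airline.test/api/v1/availability",
                        params={"from": origin, "to": dest})
    return resp.json()["fares"]

If fetching JSON really is “more intrusive” than parsing whole pages, something is wrong with the API, which is rather the point.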

And, yes, this lawsuit really calls into question Air Canada’s tech team and their ability to run a modern website. If your website can’t handle having its flights and prices scraped a few times every day, then you shouldn’t have a website. Get some modern technology, Air Canada:

Defendant’s avaricious data scraping generates frequent and myriad requests to the Air Canada Group’s database—far in excess of what the Air Canada Group’s infrastructure was designed to handle. Its scraping collects a large volume of data, including flight data within a wide date range and across extensive flight origins and destinations—multiple times per day.

Maybe… invest in better infrastructure like basically every other website that can handle some basic scraping? Or, set up your API so it doesn’t fall over when used for normal API things? Because this is embarrassing:

At times, Defendant’s voluminous requests have placed such immense burdens on the Air Canada Group’s infrastructure that it has caused “brownouts.” During a brownout, a website is unresponsive for a period of time because the capacity of requests exceeds the capacity the website was designed to accommodate. During brownouts caused by Defendant’s data scraping, legitimate customers are unable to use [Air Canada’s website] or the Air Canada + Aeroplan mobile app, including to search for available rewards, redeem Aeroplan points for the rewards, search for and view reward travel availability, book reward flights, contact Aeroplan customer support, and/or obtain service through the Aeroplan contact center due to the high volume of calls during brownouts.
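
For what it's worth, the boring, standard defense against this kind of load is rate limiting at the edge, not litigation. Here is a minimal token-bucket sketch; the numbers are illustrative, and you would typically keep one bucket per client IP or API key:

import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec   # sustained requests allowed per second
        self.capacity = burst      # short bursts allowed above the rate
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over budget: answer HTTP 429, don't brown out

With something like this in front of the API, an over-eager scraper gets 429s and everyone else keeps booking flights.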

Air Canada’s lawyers also seem wholly unfamiliar with the concept of nominative fair use for trademarks. If you’re displaying someone’s trademarks for the sake of accurately talking about them, there’s no likelihood of confusion and no concern about the source of the information. Air Canada claiming that this is trademark infringement is ridiculous:

I guarantee that no one using Seats.aero thinks that they’re on Air Canada’s website.

The whole thing is so stupid that it makes me never want to fly Air Canada again. I don’t trust an airline that can’t set up its website/API to handle someone making its flights more attractive to buyers.

But, of course, in these crazy times with the way the CFAA has been interpreted, there’s a decent chance Air Canada could win.

For its part, Carroll says that he and his lawyers have reached out to Air Canada “repeatedly” to try to work with them on how they “retrieve availability information,” and that “Air Canada has ignored these offers.” He also notes that tons of other websites are scraping the very same information, and he has no idea why he’s been singled out. He further notes that he’s always been open to adjusting the frequency of searches and working with the airlines to make sure that his activities don’t burden the website.

But, really, the whole thing is stupid. The only thing that Carroll’s website does is help people buy more flights. It points people to the Air Canada site to buy tickets. It makes people want to fly more on Air Canada.

Why would Air Canada want to stop that, other than that it can’t admit that its website operations should all be handled by a more competent team?

Source: Air Canada Would Rather Sue A Website That Helps People Book More Flights Than Hire Competent Web Engineers | Techdirt

New French AI Copyright Law Would Effectively Tax AI Companies, Enrich French Taxman

This blog has written a number of times about the reaction of creators to generative AI. Legal academic and copyright expert Andres Guadamuz has spotted what may be the first attempt to draw up a new law to regulate generative AI. It comes from French politicians, who have developed something of a habit of bringing in new laws attempting to control digital technology that they rarely understand but definitely dislike.

There are only four articles in the text of the proposal, which are intended to be added as amendments to existing French laws. Despite being short, the proposal contains some impressively bad ideas. The first of these is found in Article 2, which, as Guadamuz summarises, “assigns ownership of the [AI-generated] work (now protected by copyright) to the authors or assignees of the works that enabled the creation of the said artificial work.” Here’s the huge problem with that idea:

How can one determine the author of the works that facilitated the conception of the AI-generated piece? While it might seem straightforward if AI works are viewed as collages or summaries of existing copyrighted works, this is far from the reality. As of now, I’m unaware of any method to extract specific text from ChatGPT or an image from Midjourney and enumerate all the works that contributed to its creation. That’s not how these models operate.

Since there is no way to find out exactly which creators’ works helped generate a new piece of AI material (these models work from aggregated statistics), Guadamuz suggests that the French lawmakers might want creators to be paid according to their contribution to the training material that went into creating the generative AI system itself. Using his own writings as an example, he calculates what fraction of any given payout he would receive under this approach. For ChatGPT’s output, Guadamuz estimates he might receive 0.00001% of any payout that was made. To give an example, even if the licensing fee for some hugely popular work generated using AI were €1,000,000, Guadamuz would only receive 10 cents. Most real-life payouts to creators would be vanishingly small.
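
The arithmetic is easy to reproduce (the figures are Guadamuz's illustration, not a real payout scheme):

# Guadamuz's back-of-the-envelope payout: a creator's cut of a licensing
# fee, proportional to their estimated share of the training material.
fee_eur = 1_000_000    # hypothetically huge licensing fee
share = 0.00001 / 100  # 0.00001%, his estimated contribution
print(f"payout: {fee_eur * share:.2f} EUR")  # -> payout: 0.10 EUR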

Article 3 of the French proposal builds on this ridiculous approach by requiring the names of all the creators who contributed to some AI-generated output to be included in that work. But as Guadamuz has already noted, there’s no way to find out exactly whose works have contributed to an output, leaving the only option to include the names of every single creator whose work is present in the training set – potentially millions of names.

Interestingly, Article 4 seems to recognize the payment problem raised above, and offers a way to deal with it. Guadamuz explains:

As it will be not possible to find the author of an AI work (which remember, has copyright and therefore isn’t in the public domain), the law will place a tax on the company that operates the service. So it’s sort of in the public domain, but it’s taxed, and the tax will be paid by OpenAI, Google, Midjourney, StabilityAI, etc. But also by any open source operator and other AI providers (Huggingface, etc). And the tax will be used to fund the collective societies in France… so unless people are willing to join these societies from abroad, they will get nothing, and these bodies will reap the rewards.

In other words, the net effect of the French proposal seems to be to tax the emerging AI giants (mostly US companies) and pay the money to French collecting societies. Guadamuz goes so far as to say: “in my view, this is the real intention of the legislation”. Anyone who thinks this is a good solution might want to read Chapter 7 of Walled Culture the book (free digital versions available), which quotes from a report revealing “a long history of corruption, mismanagement, confiscation of funds, and lack of transparency [by collecting societies] that has deprived artists of the revenues they earned”. Trying to fit generative AI into the straitjacket of an outdated copyright system designed for books is clearly unwise; using it as a pretext for funneling yet more money away from creators and towards collecting societies is just ridiculous.

Source: New French AI Copyright Law Would Effectively Tax AI Companies, Enrich Collection Societies | Techdirt

Motorola’s concept slap bracelet smartphone looks convenient

Forget foldable phones, the next big trend could be gadgets that bend.

Lenovo, which is currently holding its ninth Tech World event in Austin, Texas, showed off its new collaboration with its subsidiary Motorola: a smartphone that can wrap around your wrist like a watch band.

It’s admittedly quite fascinating to see the tech in action. Lenovo calls its device the “Adaptive Display Concept”, which comprises a Full HD Plus resolution (2,228 x 1,080 pixels) pOLED screen that is able to “be bent and shaped into different” forms to meet the user’s needs. There’s no external hinge either, as the prototype is a single-screen Android phone. The company explains that bending it in half turns the 6.9-inch display into one measuring 4.6 inches across. It can stand upright on the bent portion, sit in an arc, or wrap around a wrist as mentioned earlier.

Unfortunately, that’s all we know about the hardware itself. The Adaptive Display Concept did appear on stage at Tech World 2023, where the presenter showed off its flexibility by placing it over her arm. Beyond that demonstration, though, both Lenovo and Motorola are keeping their lips sealed tight.

Source: Motorola’s concept ‘bracelet’ smartphone could be a clever final form for foldables | TechRadar

Empowering Responsible and Compliant Practices: Bridging the Gap for US Citizens and Corporations with the New EU-US Data Privacy Framework

The Data Privacy Framework (DPF) presents new legal guidance to facilitate personal data sharing between US companies and their counterparts in the EU and the UK. This framework empowers individuals with greater control over their personal data and streamlines business operations by creating common rules around interoperable dataflows. Moreover, the DPF will help enable clear contract terms and business codes of conduct for corporations that collect, use, and transfer personal data across borders.

Any business that collects data related to people in the EU must comply with the EU’s General Data Protection Regulation (GDPR), the toughest privacy and security law in the world. Thus, the DPF helps US corporations avoid potentially hefty fines and penalties by ensuring their data transfers align with GDPR requirements.

Data transfer procedures, which were historically time-consuming and riddled with legal complications, are now faster and more straightforward with the DPF, which allows for more transatlantic dataflows agreed on by US companies and their EU and UK counterparts. On July 10, 2023, the European Commission finalized an adequacy decision that assures the US offers data protection levels similar to the EU’s.

[…]

US companies can register with the DPF through the Department of Commerce DPF website. Companies that previously self-certified compliance with the EU-US Privacy Shield can transition to the DPF by recertifying their adherence to DPF principles, including updating privacy policies to reflect any change in procedures and data subject rights that are crucial for this transition. Businesses should develop privacy policies that identify an independent recourse mechanism that can address data protection concerns. To qualify for the DPF, a company must fall under the jurisdiction of either the Federal Trade Commission or the US Department of Transportation, though this reach may broaden in the future.

Source: Empowering Responsible and Compliant Practices: Bridging the Gap for US Citizens and Corporations with the New EU-US Data Privacy Framework | American Enterprise Institute – AEI

The whole self-certification thing seems leaky as a sieve to me… And once data has gone into the US intelligence services, you can assume it will go everywhere and there will be no stopping it from the EU side.

Citrix urges “immediate” patching as exploit POC goes public

Citrix has urged admins to “immediately” apply a fix for CVE-2023-4966, a critical information disclosure bug that affects NetScaler ADC and NetScaler Gateway, admitting it has been exploited.

Plus, there’s a proof-of-concept exploit, dubbed Citrix Bleed, now on GitHub. So if you are using an affected build, at this point assume you’ve been compromised, apply the update, and then kill all active sessions per Citrix’s advice from Monday.

The company first issued a patch for the vulnerable devices on October 10, and last week Mandiant warned that criminals — most likely cyberspies — have been abusing this hole to hijack authentication sessions and steal corporate info since at least late August.

[…]

Also last week, Mandiant Consulting CTO Charles Carmakal warned that “organizations need to do more than just apply the patch — they should also terminate all active sessions. These authenticated sessions will persist after the update to mitigate CVE-2023-4966 has been deployed.”

Citrix, in the Monday blog, also echoed this mitigation advice and told customers to kill all active and persistent sessions using the following commands:

kill icaconnection -all

kill rdp connection -all

kill pcoipConnection -all

kill aaa session -all

clear lb persistentSessions
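
For admins working through that advisory, here is the same sequence with a gloss of what each command tears down; the annotations are my reading of the NetScaler CLI names, not Citrix's wording:

# Run from the NetScaler CLI after installing the patched build.
kill icaconnection -all       # drop all active ICA (Citrix HDX) sessions
kill rdp connection -all      # drop all active RDP proxy sessions
kill pcoipConnection -all     # drop all active PCoIP sessions
kill aaa session -all         # drop all authenticated AAA/VPN sessions
clear lb persistentSessions   # flush load-balancer session persistence

The reason for killing everything is that session tokens stolen via Citrix Bleed keep working after the update, so any session that survives patching is a potential attacker foothold.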

[…]

Source: Citrix urges “immediate” patching as exploit POC • The Register

Clearview Gets $10 Million UK Fine Reversed, Now Owes Slightly Less To Governments Around The World

Here’s how things went for the world’s most infamous purveyor of facial recognition tech when it came to its dealings with the United Kingdom. In a word: not great.

In addition to supplying its scraped data to known human rights abusers, Clearview was found to have supplied access to a multitude of UK and US entities. At that point (early 2020), it was also making its software available to a number of retailers, suggesting the tool its CEO claimed was instrumental in fighting serious crime (CSAM, terrorism) was just as great at fighting retail theft. For some reason, an anti-human-trafficking charity headed up by author J.K. Rowling was also on the customer list obtained by Buzzfeed.

Clearview’s relationship with the UK government soon soured. In December 2021, the UK government’s Information Commissioner’s Office (ICO) said the company had violated UK privacy laws with its non-consensual scraping of UK residents’ photos and data. That initial declaration from the ICO came with a $23 million fine attached, one that was reduced to a little less than $10 million ($9.4 million) roughly six months later, accompanied by demands Clearview immediately delete all UK resident data in its possession.

This fine was one of several the company managed to rack up from foreign governments. The Italian government — citing EU privacy law violations — levied a $21 million fine. The French government came to the same conclusions and the same penalty, adding another $21 million to Clearview’s European tab.

The facial recognition tech company never bothered to proclaim its innocence after being fined by the UK government. Instead, it simply stated the UK government had no power to enforce this fine because Clearview was a United States company with no physical presence in the United Kingdom.

In addition to engaging in reputational rehab on the UK front, Clearview went to court to challenge the fine levied by the UK government. And it appears to have won this round for the moment, reducing its accounts payable ledger by about $10 million, as Natasha Lomas reports for TechCrunch.

[I]n a ruling issued yesterday its legal challenge to the ICO prevailed on jurisdiction grounds after the tribunal ruled the company’s activities fall outside the jurisdiction of U.K. data protection law owing to an exemption related to foreign law enforcement.

Which is pretty much the argument Clearview made months ago, albeit less elegantly after it was first informed of the fine. The base argument is that Clearview is a US entity providing services to foreign entities and that it’s up to its foreign customers to comply with local laws, rather than Clearview itself.

That argument worked. And it worked because it appears the ICO chose the wrong law to wield against Clearview. The UK’s GDPR does not protect UK residents from actions taken by “competent authorities for law enforcement purposes.” (lol at that entire phrase.) Government customers of Clearview are only subject to the adopted parts of the EU’s Data Protection Act post-Brexit, which means the company’s (alleged) pivot to the public sector puts both its actions — and the actions of its UK law enforcement clients — outside of the reach of the GDPR.

Per the ruling, Clearview argued it’s a foreign company providing its service to “foreign clients, using foreign IP addresses, and in support of the public interest activities of foreign governments and government agencies, in particular in relation to their national security and criminal law enforcement functions.”

That’s enough to get Clearview off the hook. While the GDPR and EU privacy laws have extraterritorial provisions, they also make exceptions for law enforcement and national security interests. GDPR has more exceptions, which made it that much easier for Clearview to walk away from this penalty by claiming it only sold to entities subject to this exception.

Whether or not that’s actually true has yet to be determined. And it might have made more sense for ICO to prosecute this under the parts of EU law the UK government decided to adopt after deciding it no longer wanted to be part of this particular union.

Even if the charges had stuck, it’s unlikely Clearview would ever have paid the fine. According to its CEO and spokespeople, Clearview owes nothing to anyone. Whatever anyone posts publicly is fair game. And if the company wants to hoover up everything on the web that isn’t nailed down, well, that’s a problem for other people to be subjected to, possibly at gunpoint. Until someone can actually make something stick, all they’ve got is bills they can’t collect and a collective GFY from one of the least ethical companies to ever get into the facial recognition business.

Source: Clearview Gets $10 Million UK Fine Reversed, Now Owes Slightly Less To Governments Around The World | Techdirt

Google Decides To Pull Up The Ladder On The Open Internet, Pushes For Unconstitutional Regulatory Proposals

It’s pretty much the way of the world: beyond the basic enshittification story that has been so well told over the past year or so about how companies get worse and worse as they get more and more powerful, there’s also the well known concept of successful innovative companies “pulling up the ladder” behind them, using the regulatory process to make it impossible for other companies to follow their own path to success. We’ve talked about this in the sense of political entrepreneurship, which is when the main entrepreneurial effort is not to innovate in newer and better products for customers, but rather using the political system for personal gain and to prevent competitors from having the same opportunities.

It happens all too frequently. And it’s been happening lately with the big internet companies, which relied on the open internet to become successful, but under massive pressure from regulators (and the media), keep shooting the open internet in the back, each time they can present themselves as “supportive” of some dumb regulatory regime. Facebook did it six years ago by supporting FOSTA wholeheartedly, which was the key tide shift that made the law viable in Congress.

And, now, it appears that Google is going down that same path. There have been hints here and there, such as when it mostly gave up the fight on net neutrality six years ago. However, Google had still appeared to be active in various fights to protect an open internet.

But, last week, Google took a big step towards pulling up the open internet ladder behind it, which got almost no coverage (and what coverage it got was misleading). And, for the life of me, I don’t understand why it chose to do this now. It’s one of the dumbest policy moves I’ve seen Google make in ages, and seems like a complete unforced error.

Last Monday, Google announced “a policy framework to protect children and teens online,” which was echoed by subsidiary YouTube, which posted basically the same thing, talking about its “principled approach for children and teenagers.” Both of these pushed not just a “principled approach” for companies to take, but a legislative model (and I hear that they’re out pushing “model bills” across legislatures as well).

The “legislative” model is, effectively, California’s Age Appropriate Design Code. Yes, the very law that was declared unconstitutional just a few weeks before Google basically threw its weight behind the approach. What’s funny is that many, many people have (incorrectly) believed that Google was some sort of legal mastermind behind the NetChoice lawsuits challenging California’s law and other similar laws, when the reality appears to be that Google knows full well that it can handle the requirements of the law, but smaller competitors cannot. Google likes the law. It wants more of them, apparently.

The model includes “age assurance” (which is effectively age verification, though everyone pretends it’s not), greater parental surveillance, and the compliance nightmare of “impact assessments” (we talked about this nonsense in relation to the California law). Again, for many companies this is a good idea. But just because something is a good idea for companies to do does not mean that it should be mandated by law.

But that’s exactly what Google is pushing for here, even as a law that more or less mimics its framework was just found to be unconstitutional. While cynical people will say that maybe Google is supporting these policies hoping that they will continue to be found unconstitutional, I see little evidence to support that. Instead, it really sounds like Google is fully onboard with these kinds of duty of care regulations that will harm smaller competitors, but which Google can handle just fine.

It’s pulling up the ladder behind it.

And yet, the press coverage of this focused on the fact that this was being presented as an “alternative” to a full on ban for kids under 18 to be on social media. The Verge framed this as “Google asks Congress not to ban teens from social media,” leaving out that it was Google asking Congress to basically make it impossible for any site other than the largest, richest companies to be able to allow teens on social media. Same thing with TechCrunch, which framed it as Google lobbying against age verification.

But… it’s not? It’s basically lobbying for age verification, just in the guise of “age assurance,” which is effectively “age verification, but if you’re a smaller company you can get it wrong some undefined amount of time, until someone sues you.” I mean, what’s here is not “lobbying against age verification,” it’s basically saying “here’s how to require age verification.”

A good understanding of user age can help online services offer age-appropriate experiences. That said, any method to determine the age of users across services comes with tradeoffs, such as intruding on privacy interests, requiring more data collection and use, or restricting adult users’ access to important information and services. Where required, age assurance – which can range from declaration to inference and verification – should be risk-based, preserving users’ access to information and services, and respecting their privacy. Where legislation mandates age assurance, it should do so through a workable, interoperable standard that preserves the potential for anonymous or pseudonymous experiences. It should avoid requiring collection or processing of additional personal information, treating all users like children, or impinging on the ability of adults to access information. More data-intrusive methods (such as verification with “hard identifiers” like government IDs) should be limited to high-risk services (e.g., alcohol, gambling, or pornography) or age correction. Moreover, age assurance requirements should permit online services to explore and adapt to improved technological approaches. In particular, requirements should enable new, privacy-protective ways to ensure users are at least the required age before engaging in certain activities. Finally, because age assurance technologies are novel, imperfect, and evolving, requirements should provide reasonable protection from liability for good-faith efforts to develop and implement improved solutions in this space.

Much like Facebook caving on FOSTA, this is Google caving on age verification and other “duty of care” approaches to regulating the way kids have access to the internet. It’s pulling up the ladder behind itself, knowing that it was able to grow without having to take these steps, and making sure that none of the up-and-coming challenges to Google’s position will have the same freedom to do so.

And, for what? So that Google can go to regulators and say “look, we’re not against regulations, here’s our framework”? But Google has smart policy people. They have to know how this plays out in reality. Just as with FOSTA, it completely backfired on Facebook (and the open internet). This approach will do the same.

Not only will these laws inevitably be used against the companies themselves, they’ll also be weaponized and modified by policymakers who will make them even worse and even more dangerous, all while pointing to Google’s “blessing” of this approach as an endorsement.

For years, Google had been somewhat unique in continuing to fight for the open internet long after many other companies were switching over to ladder pulling. There were hints that Google was going down this path in the past, but with this policy framework, the company has now made it clear that it has no intention of being a friend to the open internet any more.

Source: Google Decides To Pull Up The Ladder On The Open Internet, Pushes For Unconstitutional Regulatory Proposals | Techdirt

Well, with Chrome-only support, DNS over HTTPS, and browser privacy sandboxing, Google has been off the “do no evil” path for some time, closing off the openness of the web by rebuilding or crushing competition.

Microsoft admits ‘power issue’ downed Azure in West Europe

Microsoft techies are trying to recover storage nodes for a “small” number of customers following a “power issue” on October 20 that triggered Azure service disruptions and ruined breakfast for those wanting to use hosted virtual machines or SQL DB.

The degradation began at 0731 UTC on Friday when Microsoft spotted the unspecified power problem, which affected infrastructure in one Availability Zone in the West Europe region. As such, businesses using VMs, Storage, App Service, or Cosmos and SQL DB suffered interruptions.

So what caused this unplanned downtime session? Microsoft says in an incident report on its Azure status history page: “Due to an upstream utility disturbance, we moved to generator power for a section of one datacenter at approximately 0731 UTC. A subset of those generators supporting that section failed to take over as expected during the switch over from utility power, resulting in the impact.”

Engineers managed to restore power at around 0800 UTC and the impacted infrastructure began to clamber back online. Once the networking and storage plumbing recovered, compute scale units were brought back into service, and for the “vast majority” of customers the Azure services were accessible again from 0915 UTC.

Yet not everyone was up and running smoothly, Microsoft admitted.

“A small amount of storage nodes needs to be recovered manually, leading to delays in recovery for some services and customers. We are working to recover these nodes and will continue to communicate to these impacted customers directly via the Service Health blade in the Azure Portal.”

Source: Microsoft admits ‘power issue’ downed Azure in West Europe • The Register

Scientists create world’s most water-resistant surface

[…]

A research team in Finland, led by Robin Ras, from Aalto University, and aided by researchers from the University of Jyväskylä, has developed a mechanism to make water droplets slip off surfaces with unprecedented efficacy.

Cooking, transportation, optics and hundreds of other technologies are affected by how water sticks to surfaces or slides off them, and adoption of water-resistant surfaces in the future could improve many household and industrial technologies, such as plumbing, shipping and the auto industry.

The research team created solid silicon surfaces with a “liquid-like” outer layer that repels water by making droplets slide off surfaces. The highly mobile topcoat acts as a lubricant between the product and the water droplets.

The discovery challenges existing ideas about friction between solid surfaces and water, opening a new avenue for studying slipperiness at the molecular level.

Sakari Lepikko, the lead author of the study, which was published in Nature Chemistry on Monday, said: “Our work is the first time that anyone has gone directly to the nanometer-level to create molecularly heterogeneous surfaces.”

By carefully adjusting conditions, such as temperature and water content, inside a reactor, the team could fine-tune how much of the silicon surface the monolayer covered.

Ras said: “I find it very exciting that by integrating the reactor with an ellipsometer, we can watch the self-assembled monolayers grow with an extraordinary level of detail.

“The results showed more slipperiness when SAM [self-assembled monolayer] coverage was low or high, which are also the situations when the surface is most homogeneous. At low coverage, the silicon surface is the most prevalent component, and at high, SAMs are the most prevalent.”

Lepikko added: “It was counterintuitive that even low coverage yielded exceptional slipperiness.”

Using the new method, the team ended up creating the slipperiest liquid surface in the world.

According to Lepikko, the discovery promises to have implications wherever droplet-repellent surfaces are needed. This covers hundreds of examples from daily life to industrial environments.

[…]

“The main issue with a SAM coating is that it’s very thin, and so it disperses easily after physical contact. But studying them gives us fundamental scientific knowledge which we can use to create durable practical applications,” Lepikko said.

[…]

Source: Scientists create world’s most water-resistant surface | Materials science | The Guardian

AI and smart mouthguards: the new frontline in fight against brain injuries

There was a hidden spectator of the NFL match between the Baltimore Ravens and Tennessee Titans in London on Sunday: artificial intelligence. As crazy as it may sound, computers have now been taught to identify on-field head impacts in the NFL automatically, using multiple video angles and machine learning. So a process that would take 12 hours – for each game – is now done in minutes. The result? After every weekend, teams are sent a breakdown of which players got hit, and how often.

This tech wizardry, naturally, has a deeper purpose. Over breakfast the NFL’s chief medical officer, Allen Sills, explained how it was helping to reduce head impacts, and drive equipment innovation.

Players who experience high numbers can, for instance, be taught better techniques. Meanwhile, nine NFL quarterbacks and 17 offensive linemen are wearing position-specific helmets, which have significantly more padding in the areas where they experience more impacts.

What may be next? Getting accurate sensors in helmets, so the force of each tackle can also be estimated, is one area of interest. As is using biomarkers, such as saliva and blood, to better understand when to bring injured players back to action.

If that’s not impressive enough, this weekend rugby union became the first sport to adopt smart mouthguard technology, which flags big “hits” in real time. From January, whenever an elite player experiences an impact in a tackle or ruck that exceeds a certain threshold, they will automatically be taken off for a head injury assessment by a doctor.

No wonder Dr Eanna Falvey, World Rugby’s chief medical officer, calls it a “gamechanger” in potentially identifying many of the 18% of concussions that now come to light only after a match.

[…]

As things stand, World Rugby is combining the G-force and rotational acceleration of a hit to determine when to automatically take a player off for an HIA. Over the next couple of years, it wants to improve its ability to identify the impacts with clinical meaning, which will also mean looking at other factors, such as the duration and direction of the impact.
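
In pseudocode terms, the real-time trigger described above reduces to a threshold check on each measured impact. The cut-off values and field names below are invented placeholders (World Rugby's actual thresholds aren't given in the piece), and whether the real rule combines the two readings with "either" or "both" isn't specified:

from dataclasses import dataclass

LINEAR_G_LIMIT = 70.0    # peak linear acceleration in g (placeholder)
ROTATION_LIMIT = 4000.0  # peak rotational acceleration in rad/s^2 (placeholder)

@dataclass
class Impact:
    player_id: str
    linear_g: float      # from the mouthguard accelerometer
    rotational: float    # from the mouthguard gyroscope

def needs_hia(event: Impact) -> bool:
    # Flag the player for an off-field head injury assessment when an
    # impact exceeds either threshold (assumed combination rule).
    return (event.linear_g > LINEAR_G_LIMIT
            or event.rotational > ROTATION_LIMIT)

The other factors mentioned above, such as the duration and direction of the impact, would enter as extra fields and conditions in the same check.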

[…]

Then there is the ability to use the smart mouthguard to track load over time. “It’s one thing to assist to identify concussions,” he says. “It’s another entirely to say it’s going to allow coaches and players to track exactly how many significant head impacts they have in a career – especially with all the focus on long-term health risks. If they can manage that load, particularly in training, that has performance and welfare benefits.”

[…]

Source: AI and smart mouthguards: the new frontline in fight against brain injuries | Sport | The Guardian

Spacecraft re-entry filling the atmosphere with metal vapor – and there will be more of it coming in

A group of scientists studying the effects of rocket and satellite reentry vaporization in Earth’s atmosphere have found some startling evidence that could point to disastrous environmental effects on the horizon.

The study, published in the Proceedings of the National Academy of Sciences, found that around 10 percent of large (>120 nm) sulfuric acid particles in the stratosphere contain aluminum and other elements consistent with the makeup of alloys used in spacecraft construction, including lithium, copper and lead. The other 90 percent comes from “meteoric smoke,” the particles left over when meteors vaporize during atmospheric entry, and that naturally occurring share is expected to plummet as satellite reentries multiply.

“The space industry has entered an era of rapid growth,” the boffins said in their paper, “with tens of thousands of small satellites planned for low earth orbit.

“It is likely that in the next few decades, the percentage of stratospheric sulfuric acid particles that contain aluminum and other metals from satellite reentry will be comparable to the roughly 50 percent that now contain meteoric metals,” the team concluded.

Atmospheric circulation at those altitudes (beginning somewhere between four and 12 miles above ground level and extending up to 31 miles above Earth) means such particles are unlikely to have an effect on the surface environment or human health, the researchers opined.

Stratospheric changes might be even scarier, though

Earth’s stratosphere has classically been considered pristine, said Dan Cziczo, one of the study’s authors and head of Purdue University’s department of Earth, atmospheric and planetary studies. “If something is changing in the stratosphere – this stable region of the atmosphere – that deserves a closer look.”

One of the major features of the stratosphere is the ozone layer, which protects Earth and its varied inhabitants from harmful UV radiation. It’s been harmed by human activity before action was taken, and an increase in aerosolized spacecraft particles could have several consequences to our planet.

One possibility is effects on the nucleation of ice and nitric acid trihydrate, which form in stratospheric clouds over Earth’s polar regions where currents in the mesosphere (the layer above the stratosphere) tend to deposit both meteoric and spacecraft aerosols.

Ice formed in the stratosphere doesn’t necessarily reach the ground, and is more likely to affect polar stratospheric clouds, lead author and National Oceanic and Atmospheric Administration scientist Daniel Murphy told The Register.

“Polar stratospheric clouds are involved in the chemistry of the ozone hole,” Murphy said. However, “it is too early to know if there is any impact on ozone chemistry,” he added.

Along with changes in atmospheric ice formation and the ozone layer, the team said that more aerosols from vaporized spacecraft could change the stratospheric aerosol layer – a layer scientists have proposed deliberately seeding to reflect sunlight and blunt the effects of global warming.

The amount of material injected by spacecraft reentry is much smaller than the quantities scientists have considered for intentional injection, Murphy told us. However, “intentional injection of exotic materials into the stratosphere could raise many of the same questions [as the paper] on an even bigger scale,” he noted.

[…]

Source: Spacecraft re-entry filling the atmosphere with metal vapor • The Register

Uncle Sam paid to develop a cancer drug and now one guy will get to charge whatever he wants for it

The argument for pharma patents: making new medicines is expensive, and medicines are how we save ourselves from cancer and other diseases. Therefore, we will award government-backed monopolies – patents – to pharma companies so they will have an incentive to invest their shareholders’ capital in research.

There’s plenty wrong with this argument. For one thing, pharma companies use their monopoly winnings to sell drugs, not invent drugs. For every dollar pharma spends on research, it spends three dollars on marketing:

https://www.bu.edu/sph/files/2015/05/Pharmaceutical-Marketing-and-Research-Spending-APHA-21-Oct-01.pdf

And that “R&D” isn’t what you’re thinking of, either. Most R&D spending goes to “evergreening” – coming up with minor variations on existing drugs in a bid to extend those patents for years or decades:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3680578/

Evergreening got a lot of attention recently when John Green rained down righteous fire upon Johnson & Johnson for their sneaky tricks to prevent poor people from accessing affordable TB meds, prompting this excellent explainer from the Arm and A Leg Podcast:

https://armandalegshow.com/episode/john-green-part-1/

Another thing those monopoly profits are useful for: “pay for delay,” where pharma companies bribe generic manufacturers not to make cheap versions of drugs whose patents have expired. Sure, it’s illegal, but that doesn’t stop ’em:

https://www.ftc.gov/news-events/topics/competition-enforcement/pay-delay

But it’s their money, right? If they want to spend it on bribes or evergreening or marketing, at least some of that money is going into drugs that’ll keep you and the people you love from enduring unimaginable pain or dying slowly and hard. Surely that warrants a patent.

Let’s say it does. But what about when a pharma company gets a patent on a life-saving drug that the public paid to develop, test and refine? Publicly funded work is presumptively in the public domain, from NASA R&D to the photos that park rangers shoot of our national parks. The public pays to produce this work, so it should belong to the public, right?

That was the deal – until Congress passed the Bayh-Dole Act in 1980. Under Bayh-Dole, government-funded inventions are given away – to for-profit corporations, who get to charge us whatever they want to access the things we paid to make. The basis for this is a racist hoax called “The Tragedy Of the Commons,” written by the eugenicist white supremacist Garrett Hardin and published by Science in 1968:

https://memex.craphound.com/2019/10/01/the-tragedy-of-the-commons-how-ecofascism-was-smuggled-into-mainstream-thought/

Hardin invented an imaginary history in which “commons” – things owned and shared by a community – are inevitably overrun by selfish assholes, a fact that prompts nice people to also overrun these commons, so as to get some value out of them before they are gobbled up by people who read Garrett Hardin essays.

Hardin asserted this as a historical fact, but he cited no instances in which it happened. When the Nobel-winning Elinor Ostrom actually went and looked at how commons are managed, she found that they are robust and stable over long time periods, and are a supremely efficient way of managing resources:

https://pluralistic.net/2023/05/04/analytical-democratic-theory/#epistocratic-delusions

The reason Hardin invented an imaginary history of tragic commons was to justify enclosure: moving things that the public owned and used freely into private ownership. Or, to put it more bluntly, Hardin invented a pseudoscientific justification for giving away parks, roads and schools to rich people and letting them charge us to use them.

To arrive at this fantasy, Hardin deployed one of the most important analytical tools of modern economics: introspection. As Ely Devons put it: “If economists wished to study the horse, they wouldn’t go and look at horses. They’d sit in their studies and say to themselves, ‘What would I do if I were a horse?’”

https://pluralistic.net/2022/10/27/economism/#what-would-i-do-if-i-were-a-horse

Hardin’s hoax swept from the fringes to the center and became received wisdom – so much so that by 1980, Senators Birch Bayh and Bob Dole were able to pass a law that gave away publicly funded medicine to private firms, because otherwise these inventions would be “overgrazed” by greedy people, denying the public access to lifesaving drugs.

On September 21, the NIH quietly published an announcement of one of these pharmaceutical transfers, buried in a list of 31 patent assignments in the Federal Register:

https://public-inspection.federalregister.gov/2023-20487.pdf

The transfer in question is a patent for using T-cell receptors (TCRs) to treat solid tumors from HPV, one of the only patents for treating solid tumors with TCRs. The beneficiary of this transfer is Scarlet TCR, a Delaware company with no website or SEC filings and ownership shrouded in mystery:

https://www.bizapedia.com/de/scarlet-tcr-inc.html

One person who pays attention to this sort of thing is James Love, co-founder of Knowledge Ecology International, a nonprofit that has worked for decades for access to medicines. Love sleuthed out at least one person behind Scarlet TCR: Christian Hinrichs, a researcher at Rutgers who used to work at the NIH’s National Cancer Institute:

https://www.nih.gov/research-training/lasker-clinical-research-scholars/tenured-former-scholars

Love presumes Hinrichs is the owner of Scarlet TCR, but neither the NIH nor Scarlet TCR nor Hinrichs will confirm it. Hinrichs was one of the publicly-funded researchers who worked on the new TCR therapy, for which he received a salary.

This new drug was paid for out of the public purse. The basic R&D – salaries for Hinrichs and his collaborators, as well as funding for their facilities – came out of NIH grants. So did the funding for the initial Phase I trial, and the ongoing large Phase II trial.

As David Dayen writes in The American Prospect, the proposed patent transfer will make Hinrichs a very wealthy man (Love calls it “generational wealth”):

https://prospect.org/health/2023-10-18-nih-how-to-become-billionaire-program/

This wealth will come by charging us – the public – to access a drug that we paid to produce. The public took all the risks to develop this drug, and Hinrichs stands to become a billionaire by reaping the rewards – rewards that will come by extracting fortunes from terrified people who don’t want to die from tumors that are eating them alive.

The transfer of this patent is indefensible. The government isn’t even waiting until the Phase II trials are complete to hand over our commonly owned science.

But there’s still time. The NIH is about to get a new director, Monica Bertagnolli – Hinrichs’s former boss – who will need to go before the Senate Health, Education, Labor and Pensions Committee for confirmation. Love is hoping that the confirmation hearing will present an opportunity to question Bertagnolli about the transfer – specifically, why the drug isn’t being nonexclusively licensed to lots of drug companies who will have to compete to sell the cheapest possible version.

Source: Pluralistic: Uncle Sam paid to develop a cancer drug and now one guy will get to charge whatever he wants for it (19 Oct 2023) – Pluralistic: Daily links from Cory Doctorow

Universal Music sues AI start-up Anthropic for scraping song lyrics – will they come after you for having read the lyrics or memorised the song next?

Universal Music has filed a copyright infringement lawsuit against artificial intelligence start-up Anthropic, as the world’s largest music group battles against chatbots that churn out its artists’ lyrics.

Universal and two other music companies allege that Anthropic scrapes their songs without permission and uses them to generate “identical or nearly identical copies of those lyrics” via Claude, its rival to ChatGPT.

When Claude is asked for lyrics to the song “I Will Survive” by Gloria Gaynor, for example, it responds with “a nearly word-for-word copy of those lyrics,” Universal, Concord, and ABKCO said in a filing with a US court in Nashville, Tennessee.

“This copyrighted material is not free for the taking simply because it can be found on the Internet,” the music companies said, while claiming that Anthropic had “never even attempted” to license their copyrighted work.

[…]

Universal earlier this year asked Spotify and other streaming services to cut off access to its music catalogue for developers using it to train AI technology.

Source: Universal Music sues AI start-up Anthropic for scraping song lyrics | Ars Technica

So don’t think about memorising or even listening to copyrighted material from them because apparently they will come after you with the mighty and crazy arm of the law!

Faster-Than-Light ‘Quasiparticles’ Touted as Futuristic Light Source

[…] But these light sources [needed for experiments in the quantum realm] are not common. They’re expensive to build, require large amounts of land, and can be booked up by scientists months in advance. Now, a team of physicists posits that quasiparticles—groups of electrons that behave as if they were one particle—can be used as light sources in smaller lab and industry settings, making it easier for scientists to make discoveries wherever they are. The team’s research describing their findings is published today in Nature Photonics.

“No individual particles are moving faster than the speed of light, but features in the collection of particles can, and do,” said John Palastro, a physicist at the Laboratory for Laser Energetics at the University of Rochester and co-author of the new study, in a video call with Gizmodo. “This does not violate any rules or laws of physics.”

[…]

In their paper, the team explores the possibility of making plasma accelerator-based light sources as bright as larger free electron lasers by using quasiparticles to make their light more coherent. The team ran simulations of quasiparticles’ properties in a plasma using supercomputers made available by the European High Performance Computing Joint Undertaking (EuroHPC JU), according to a University of Rochester release.

[…]

In a linear accelerator, “every electron is doing the same thing as the collective thing,” said Bernardo Malaca, a physicist at the Instituto Superior Técnico in Portugal and the study’s lead author, in a video call with Gizmodo. “There is no electron that’s undulating in our case, but we’re still making an undulator-like spectrum.”

The researchers liken quasiparticles to the Mexican wave, a popular collective behavior in which sports fans stand up and sit down in sequence. A stadium full of people can give the illusion of a wave rippling around the venue, though no one person is moving laterally.

“One is clearly able to see that the wave could in principle travel faster than any human could, provided the audience collaborates. Quasiparticles are very similar, but the dynamics can be more extreme,” said co-author Jorge Vieira, also a physicist at the Instituto Superior Técnico, in an email to Gizmodo. “For example, single particles cannot travel faster than the speed of light, but quasiparticles can travel at any velocity, including superluminal.”

“Because quasiparticles are a result of a collective behavior, there are no limits for its acceleration,” Vieira added. “In principle, this acceleration could be as strong as in the vicinity of a black-hole, for example.”
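The stadium-wave picture is easy to make concrete. Here is a minimal numpy sketch (all numbers invented) in which every “fan” only stands up and sits down in place, yet the crest sweeps along the stand at a speed set purely by timing – 500 m/s below, far beyond any individual’s motion:

```python
import numpy as np

# Toy "Mexican wave": each fan only stands up and sits down in place,
# but the crest sweeps along the stand at whatever speed the timing
# dictates -- individual motion imposes no limit at all.
n_fans = 1000
seat_positions = np.linspace(0.0, 100.0, n_fans)  # metres along the stand

crest_speed = 500.0                                # m/s, chosen arbitrarily
stand_times = seat_positions / crest_speed         # when each fan stands up

def wave_height(t, width=0.05):
    """Each fan's height is a bump centred on their personal stand time."""
    return np.exp(-((t - stand_times) / width) ** 2)

# Track the crest (the tallest fan) at successive instants:
for t in (0.05, 0.10, 0.15):
    crest = seat_positions[np.argmax(wave_height(t))]
    print(f"t={t:.2f}s  crest at {crest:5.1f} m")

# The crest advances ~25 m every 0.05 s (i.e. 500 m/s), although every
# fan's velocity along the stand is exactly zero -- the analogue of a
# quasiparticle feature outrunning the particles that make it up.
```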

[…]

The difference between what is perceptually happening and actually happening regarding traveling faster than light is an “unneeded distinction,” Malaca said. “There are actual things that travel faster than light, which are not individual particles, but are waves or current profiles. Those travel faster than light and can produce real faster-than-light-ish effects. So you measure things that you only associate with superluminal particles.”

The group found that the electrons’ collective quality doesn’t have to be as pristine as the beams produced by large facilities, and could practically be implemented in more “table-top” settings, Palastro said. In other words, scientists could run experiments using very bright light sources on-site, instead of having to wait for an opening at an in-demand linear accelerator.

Source: Faster-Than-Light ‘Quasiparticles’ Touted as Futuristic Light Source

Code.org Presses Washington To Make Computer Science a High School Graduation Requirement – this should happen everywhere

In July, Seattle-based and tech-backed nonprofit Code.org announced its 10th policy recommendation for all states: “to require all students to take computer science (CS) to earn a high school diploma.” In August, Washington State Senator Lisa Wellman phoned in her plans to introduce a bill to make computer science a Washington high school graduation requirement to the state’s Board of Education, indicating that the ChatGPT-sparked AI craze and Code.org had helped convince her of the need. Wellman, a former teacher who worked as a programmer/systems analyst in the ’80s before becoming an Apple VP (Publishing) in the ’90s, also indicated that exposure to CS in fifth grade could be sufficient to satisfy a HS CS requirement. In 2019, Wellman sponsored Microsoft-supported SB 5088, which required all Washington state public high schools to offer a CS class. Wellman also sponsored SB 5299 in 2021, which allows high school students to count a computer science elective in place of a third year of math or science (which may be required for college admission) towards graduation requirements.

And in October, Code.org CEO Hadi Partovi appeared before the Washington State Board of Education, driving home the points Senator Wellman made in August with a deck containing slides calling for Washington to “require that all students take computer science to earn a high school diploma” and to “require computer science within all teacher certifications.” Like Wellman, Partovi suggested the CS high school requirement might be satisfied by middle school work (he alternatively suggested one year of foreign language could be dropped to accommodate a HS CS course). Partovi noted that Washington is home to some of the biggest promoters of K-12 CS in Microsoft Philanthropies’ TEALS (TEALS founder Kevin Wang is a member of the Washington State Board of Education) and Code.org, as well as some of the biggest funders of K-12 CS in Amazon and Microsoft — both of which are $3,000,000+ Platinum Supporters of Code.org and have top execs on Code.org’s Board of Directors.

Source: Code.org Presses Washington To Make Computer Science a High School Graduation Requirement – Slashdot

Most kids have no clue how a computer works, let alone how to program one. It’s not difficult, but it is an essential skill in today’s society.

IBM chip speeds up AI by combining processing and memory in the core


IBM’s massive NorthPole processor chip eliminates the need to frequently access external memory, and so performs tasks such as image recognition faster than existing architectures do — while consuming vastly less power.

“Its energy efficiency is just mind-blowing,” says Damien Querlioz, a nanoelectronics researcher at the University of Paris-Saclay in Palaiseau. The work, published in Science, shows that computing and memory can be integrated on a large scale, he says. “I feel the paper will shake the common thinking in computer architecture.”

NorthPole runs neural networks: multi-layered arrays of simple computational units programmed to recognize patterns in data. A bottom layer takes in data, such as the pixels in an image; each successive layer detects patterns of increasing complexity and passes information on to the next layer. The top layer produces an output that, for example, can express how likely an image is to contain a cat, a car or other objects.
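That layered pass is ordinary feed-forward arithmetic; a minimal sketch with random, untrained weights and invented labels (nothing NorthPole-specific) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# 64 "pixels" in, two hidden pattern-detecting layers, 3 output labels
sizes = [64, 32, 16, 3]
weights = [rng.normal(0.0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

image = rng.random(64)                  # stand-in for pixel data
activation = image
for w in weights[:-1]:
    activation = relu(w @ activation)   # each layer re-encodes the last
scores = softmax(weights[-1] @ activation)

print(dict(zip(["cat", "car", "other"], scores.round(3))))
```

Each `w @ activation` is exactly the multiply-accumulate work NorthPole keeps next to its on-core memory instead of fetching weights from off-chip DRAM.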

[…]

NorthPole is made of 256 computing units, or cores, each of which contains its own memory. “You’re mitigating the Von Neumann bottleneck within a core,” says Dharmendra Modha, IBM’s chief scientist for brain-inspired computing at the company’s Almaden research centre in San Jose.

The cores are wired together in a network inspired by the white-matter connections between parts of the human cerebral cortex, Modha says. This and other design principles — most of which existed before but had never been combined in one chip — enable NorthPole to beat existing AI machines by a substantial margin in standard benchmark tests of image recognition. It also uses one-fifth of the energy of state-of-the-art AI chips, despite not using the most recent and most miniaturized manufacturing processes. If the NorthPole design were implemented with the most up-to-date manufacturing process, its efficiency would be 25 times better than that of current designs, the authors estimate.

[…]

NorthPole brings memory units as physically close as possible to the computing elements in the core. Elsewhere, researchers have been developing more-radical innovations using new materials and manufacturing processes. These enable the memory units themselves to perform calculations, which in principle could boost both speed and efficiency even further.

Another chip, described last month, does in-memory calculations using memristors, circuit elements able to switch between being a resistor and a conductor. “Both approaches, IBM’s and ours, hold promise in mitigating latency and reducing the energy costs associated with data transfers,”
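The appeal of in-memory computing is that the physics of the array does the arithmetic: store a weight matrix as conductances, apply input voltages, and the column currents are the matrix-vector product. A toy numerical model (invented values; real devices add noise, nonlinearity and read circuitry):

```python
import numpy as np

rng = np.random.default_rng(1)

# Memristor-crossbar sketch: weights live in the array as conductances G.
G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances, invented values
v = rng.uniform(0.0, 0.2, size=4)        # input voltages on the rows

# Kirchhoff's current law sums i = g*v down each column "for free",
# so the multiply happens where the data lives -- no weight movement.
currents = G.T @ v
print(currents)
```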

[…]

Another approach, developed by several teams — including one at a separate IBM lab in Zurich, Switzerland — stores information by changing a circuit element’s crystal structure. It remains to be seen whether these newer approaches can be scaled up economically.

Source: ‘Mind-blowing’ IBM chip speeds up AI

Equifax poked with paltry $13.4 million following 147m customer data breach in 2017

Credit bureau Equifax has been fined US$13.4 million by the Financial Conduct Authority (FCA), a UK financial watchdog, following its involvement in “one of the largest” data breaches ever.

The cyber security incident took place in 2017, when Equifax’s US-based parent company, Equifax Inc., suffered a data breach in which malicious actors accessed the personal data of up to 147.9 million customers. The FCA also revealed that, because this data was stored on company servers in the US, the hack exposed the personal data of 13.8 million UK customers.

The data accessed during the hack included Equifax membership login details, customer names, dates of birth, partial credit card details and addresses.

According to the FCA, the cyber attack and subsequent data breach was “entirely preventable” and exposed UK customers to financial crime.

“There were known weaknesses in Equifax Inc’s data security systems and Equifax failed to take appropriate action in response to protect UK customer data,” the FCA explained.

The authority also noted that the UK arm of Equifax was not made aware that its customers’ data had been accessed during the hack until six weeks after the cyber security incident was discovered by Equifax Inc.

The company was also fined $60,727 by the British Information Commissioner’s Office (ICO) in 2018 in relation to the data breach.

On October 13, Equifax stated that it had fully cooperated with the FCA throughout the extensive investigation. The FCA also said that the fine levelled at Equifax Inc had been reduced following the company’s agreement to cooperate with the watchdog and resolve the matter.

Patricio Remon, president for Europe at Equifax, said that since the cyber attack against Equifax in 2017, the company has “invested over $1.5 billion in a security and technology transformation”. Remon also said that “few companies have invested more time and resources than Equifax to ensure that consumers’ information is protected”.

Source: Equifax fined $13.4 million following data breach

Cisco Can’t Stop Using Hard-Coded Passwords

There’s a new Cisco vulnerability in its Emergency Responder product:

This vulnerability is due to the presence of static user credentials for the root account that are typically reserved for use during development. An attacker could exploit this vulnerability by using the account to log in to an affected system. A successful exploit could allow the attacker to log in to the affected system and execute arbitrary commands as the root user.

This is not the first time Cisco products have had hard-coded passwords made public. You’d think it would learn.
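The bug class reduces to an authentication path like the following hypothetical sketch (invented names and values – not Cisco’s code), where a development-era root credential ships identically on every unit and can never be rotated by the customer:

```python
import hmac

# Hypothetical illustration of a hard-coded credential -- all values invented.
STATIC_DEV_PASSWORD = "build-team-2019"          # baked into every build

user_database = {"admin": "per-device-secret"}   # normal, changeable accounts

def login(user: str, password: str) -> bool:
    # The fatal branch: a root login that no customer can change or disable.
    if user == "root" and hmac.compare_digest(password, STATIC_DEV_PASSWORD):
        return True
    return hmac.compare_digest(user_database.get(user, ""), password)

print(login("root", "build-team-2019"))  # True on every affected system
```

One leaked or reverse-engineered password therefore unlocks the entire install base at root, which is why static credentials keep earning maximum-severity advisories.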

Source: Cisco Can’t Stop Using Hard-Coded Passwords – Schneier on Security

Google’s AI stoplight program leads to fewer stops and lower emissions

It’s been two years since Google first debuted Project Green Light, a novel means of addressing the street-level pollution caused by vehicles idling at stop lights.

[…]

Green Light uses machine learning systems to comb through Maps data to calculate the amount of traffic congestion present at a given light, as well as the average wait times of vehicles stopped there.
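Google hasn’t published the model’s internals, but the underlying signal is easy to picture. A toy sketch (entirely invented data and names) that ranks intersections by average stopped time, the kind of statistic a timing recommendation would start from:

```python
from collections import defaultdict
from statistics import mean

# Invented stand-ins for aggregated Maps traces:
# (intersection_id, seconds a vehicle sat stopped at the light)
observations = [
    ("5th_and_main", 42), ("5th_and_main", 55), ("5th_and_main", 48),
    ("oak_and_1st", 8), ("oak_and_1st", 12), ("oak_and_1st", 5),
]

waits = defaultdict(list)
for intersection, stopped_seconds in observations:
    waits[intersection].append(stopped_seconds)

# Long average dwell times flag lights whose timing plans deserve a look.
for intersection, secs in sorted(waits.items(), key=lambda kv: -mean(kv[1])):
    print(f"{intersection}: mean stop {mean(secs):.1f}s over {len(secs)} visits")
```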

[…]

When the program was first announced in 2021, it had only been pilot tested at four intersections in Israel in partnership with the Israel National Roads Company, but Google reportedly observed a “10 to 20 percent reduction in fuel and intersection delay time” during those tests. The pilot program has grown since then, spreading to a dozen partner cities around the world, including Rio de Janeiro, Brazil; Manchester, England; and Jakarta, Indonesia.

“Today we’re happy to share that… we plan to scale to more cities in 2024,” Yael Maguire, Google VP of Geo Sustainability, told reporters during a pre-brief event last week. “Early numbers indicate a potential for us to see a 30 percent reduction in stops.”

[…]

“Our AI recommendations work with existing infrastructure and traffic systems,” Maguire continued. “City engineers are able to monitor the impact and see results within weeks.” Maguire also noted that the Manchester test reportedly saw emission levels and air quality improve by as much as 18 percent. The company also touted the efficacy of its Maps routing in reducing emissions, with Maguire pointing out that it had “helped prevent more than 2.4 million metric tons of carbon emissions — the equivalent of taking about 500,000 fuel-based cars off the road for an entire year.”

Source: Google’s AI stoplight program is now calming traffic in a dozen cities worldwide

WHO recommends cheap malaria vaccine

The vaccine, developed by the University of Oxford, is only the second malaria vaccine ever developed.

Malaria kills mostly babies and infants, and has been one of the biggest scourges on humanity.

There are already agreements in place to manufacture more than 100 million doses a year.

It has taken more than a century of scientific effort to develop effective vaccines against malaria.

The disease is caused by a complex parasite, which is spread by the bite of blood-sucking mosquitoes. It is far more sophisticated than a virus as it hides from our immune system by constantly shape-shifting inside the human body.

[…]

The WHO said the effectiveness of the two vaccines was “very similar” and there was no evidence one was better than the other.

However, the key difference is the ability to manufacture the University of Oxford vaccine – called R21 – at scale.

The world’s largest vaccine manufacturer – the Serum Institute of India – is already lined up to make more than 100 million doses a year and plans to scale up to 200 million doses a year.

So far there are only 18 million doses of RTS,S.

The WHO said the new R21 vaccine would be a “vital additional tool”. Each dose costs $2-4 (£1.65 to £3.30) and four doses are needed per person. That is about half the price of RTS,S.
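Taking those figures at face value, the per-person arithmetic works out as follows (the RTS,S range is inferred from “about half the price”):

```latex
\[
\text{R21: } 4 \text{ doses} \times \$2\text{--}4 \text{ per dose} = \$8\text{--}16
\qquad
\text{RTS,S: } \approx 2 \times \text{R21} \approx \$16\text{--}32
\]
```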

[…]

That makes it hard to build up immunity naturally through catching malaria, and difficult to develop a vaccine against it.

It is almost two years to the day since the first vaccine – called RTS,S and developed by GSK – was backed by the WHO.

Source: Malaria vaccine big advance against major child killer – BBC News

Adobe previews AI upscaling to make blurry videos and GIFs look fresh

Adobe has developed an experimental AI-powered upscaling tool that greatly improves the quality of low-resolution GIFs and video footage. This isn’t a fully fledged app or feature yet, and it’s not yet available for beta testing, but if the demonstrations seen by The Verge are anything to go by, it has some serious potential.

Adobe’s “Project Res-Up” uses diffusion-based upsampling technology (a class of generative AI that generates new data based on the data it’s trained on) to increase video resolution while simultaneously improving sharpness and detail.

In a side-by-side comparison that shows how the tool can upscale video resolution, Adobe took a clip from The Red House (1947) and upscaled it from 480 x 360 to 1280 x 960, increasing the total pixel count by 675 percent. The resulting footage was much sharper, with the AI removing most of the blurriness and even adding in new details like hair strands and highlights. The results still carried a slightly unnatural look (as much AI-generated video and imagery does), but given the low initial video quality it’s still an impressive leap compared to the upscaling on Nvidia’s Shield TV or Microsoft’s Video Super Resolution.

The footage below provided by Adobe matches what I saw in the live demonstration:

[Image: a clip from The Red House (1947), left: original, right: upscaled. Running the clip through Project Res-Up removes most of the blur and makes details like the character’s hair and eyes much sharper. Credit: The Red House (1947) / United Artists / Adobe]

Another demonstration showed a video being cropped to focus on a baby elephant, with the upscaling tool similarly boosting the low-resolution crop and eradicating most of the blur while also adding little details like skin wrinkles. It really does look as though the tool is sharpening low-contrast details that can’t be seen in the original footage. Impressively, the generated wrinkles move naturally with the animal without looking overly artificial. Adobe also showed Project Res-Up upscaling GIFs to breathe new life into memes you haven’t used since the days of MySpace.

[Image: side-by-side baby elephant footage, left: original, right: upscaled. Additional texture has been applied to make the upscaled footage appear more natural and lifelike. Credit: Adobe]

The project will be revealed during the “Sneaks” section of the Adobe Max event later today, which the creative software giant uses to showcase future technologies and ideas that could potentially join Adobe’s product lineup. That means you won’t be able to try out Project Res-Up on your old family videos (yet) but its capabilities could eventually make their way into popular editing apps like Adobe Premiere Pro or Express. Previous Adobe Sneaks have since been released as apps and features, like Adobe Fresco and Photoshop’s content-aware tool.

Source: Adobe previews AI upscaling to make blurry videos and GIFs look fresh – The Verge

Climate crisis will make Europe’s beer cost more and taste worse

Climate breakdown is already changing the taste and quality of beer, scientists have warned.

The quantity and quality of hops, a key ingredient in most beers, are being affected by global heating, according to a study. As a result, beer may become more expensive and manufacturers will have to adapt their brewing methods.

Researchers forecast that hop yields in European growing regions will fall by 4-18% by 2050 if farmers do not adapt to hotter and drier weather, while the content of alpha acids in the hops, which gives beers their distinctive taste and smell, will fall by 20-31%.

“Beer drinkers will definitely see the climate change, either in the price tag or the quality,” said Miroslav Trnka, a scientist at the Global Change Research Institute of the Czech Academy of Sciences and co-author of the study, published in the journal Nature Communications. “That seems to be inevitable from our data.”

Beer, the third-most popular drink in the world after water and tea, is made by fermenting malted grains like barley with yeast. It is usually flavoured with aromatic hops, which are grown mostly in the middle latitudes and are sensitive to changes in light, heat and water.

[…]

Source: Climate crisis will make Europe’s beer cost more and taste worse, say scientists | Europe | The Guardian

Microplastics detected in clouds hanging atop two Japanese mountains

[…]

The clouds around Japan’s Mount Fuji and Mount Oyama contain concerning levels of the tiny plastic bits, and highlight how the pollution can be spread long distances, contaminating the planet’s crops and water via “plastic rainfall”.

The plastic was so concentrated in the samples researchers collected that it is thought to be causing clouds to form while giving off greenhouse gases.

“If the issue of ‘plastic air pollution’ is not addressed proactively, climate change and ecological risks may become a reality, causing irreversible and serious environmental damage in the future,” the study’s lead author, Hiroshi Okochi, a professor at Waseda University, said in a statement.

The peer-reviewed paper was published in Environmental Chemistry Letters, and the authors believe it is the first to check clouds for microplastics.

[…]

Waseda researchers gathered samples at altitudes ranging from 1,300 to 3,776 meters, which revealed nine types of polymers, such as polyurethane, and one type of rubber. The clouds’ mist contained about 6.7 to 13.9 pieces of microplastic per litre, and among them was a large volume of “water loving” plastic bits, which suggests the pollution “plays a key role in rapid cloud formation, which may eventually affect the overall climate”, the authors wrote in a press release.

That is potentially a problem because microplastics degrade much faster when exposed to ultraviolet light in the upper atmosphere, and give off greenhouse gases as they do. A high concentration of these microplastics in clouds in sensitive polar regions could throw off the ecological balance, the authors wrote.

The findings highlight how microplastics are highly mobile and can travel long distances through the air and environment. Previous research has found the material in rain, and the study’s authors say the main source of airborne plastics may be sea spray: aerosols released when waves crash or ocean bubbles burst. Dust kicked up by cars on roads is another potential source, the authors wrote.

Source: Microplastics detected in clouds hanging atop two Japanese mountains