Virgin Galactic successfully flies tourists to space for first time

Virgin Galactic’s VSS Unity, the reusable rocket-powered space plane carrying the company’s first crew of tourists to space, successfully launched and landed on Thursday.

The mission, known as Galactic 02, took off shortly after 11am ET from Spaceport America in New Mexico.

Aboard the spacecraft were six individuals in total – the space plane’s commander and former Nasa astronaut CJ Sturckow, the pilot Kelly Latimer, and Beth Moses, Virgin Galactic’s chief astronaut instructor, who trained the crew before the flight.

The spacecraft also carried three private passengers, including the health and wellness coach Keisha Schahaff and her 18-year-old daughter, Anastasia Mayers, both of whom are Antiguan.

According to Space.com, Schahaff won her seat aboard the Galactic 02 as part of a fundraising competition by Space for Humanity, a non-profit organization seeking to democratize space travel. Mayers is studying philosophy and physics at Aberdeen University in Scotland. Together, Schahaff and Mayers are the first mother-daughter duo to venture to space together.

[…]

Source: Virgin Galactic successfully flies tourists to space for first time | Virgin Galactic | The Guardian

Canon is getting away with printers that won’t scan without ink — but HP might pay

Were you hoping Canon might be held accountable for its all-in-one printers that mysteriously can’t scan when they’re low on ink, forcing you to buy more? Tough: the lawsuit we told you about last year quietly ended in a private settlement rather than becoming a big class-action.

I just checked, and a judge already dismissed David Leacraft’s lawsuit in November, without Canon ever being forced to show what happens when you try to scan without a full ink cartridge. (Numerous Canon customer support reps wrote that it simply doesn’t work.)

Here’s the good news: HP, an even larger and more shameless manufacturer of printers, is still possibly facing down a class-action suit for the same practice.

As Reuters reports, a judge has refused to dismiss a lawsuit by Gary Freund and Wayne McMath that alleges many HP printers won’t scan or fax documents when their ink cartridges report that they’ve run low.

[…]

Interestingly, neither Canon nor HP spent any time trying to argue their printers do scan when they’re low on ink in the lawsuit responses I’ve read. Perhaps they can’t deny it? Epson, meanwhile, has an entire FAQ dedicated to reassuring customers that it hasn’t pulled that trick since 2008. (Don’t worry, Epson has other forms of printer enshittification.)

[…]

Source: Canon is getting away with printers that won’t scan sans ink — but HP might pay

CNET Deletes Thousands of Old Articles to Game Google Search

Tech news website CNET has deleted thousands of old articles over the past few months in a bid to improve its performance in Google Search results, Gizmodo has learned.

Archived copies of CNET’s author pages show the company deleted small batches of articles prior to the second half of July, but then the pace increased. Thousands of articles disappeared in recent weeks. A CNET representative confirmed that the company was culling stories but declined to share exactly how many it has taken down.

[…]

Taylor Canada, CNET’s senior director of marketing and communications, said: “In an ideal world, we would leave all of our content on our site in perpetuity. Unfortunately, we are penalized by the modern internet for leaving all previously published content live on our site.”

[…]

CNET shared an internal memo about the practice. Removing, redirecting, or refreshing irrelevant or unhelpful URLs “sends a signal to Google that says CNET is fresh, relevant and worthy of being placed higher than our competitors in search results,” the document reads.

According to the memo about the “content pruning,” the company considers a number of factors before it “deprecates” an article, including SEO, the age and length of the story, traffic to the article, and how frequently Google crawls the page. The company says it weighs historical significance and other editorial factors before an article is taken down. When an article is slated for deletion, CNET says it maintains its own copy, and sends the story to the Internet Archive’s Wayback Machine.
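The memo’s factors can be imagined as a weighted score. The sketch below is purely hypothetical – the factor names come from the memo, but the weights, thresholds, and the `deprecation_score` formula are invented for illustration:

```python
# Hypothetical "content pruning" score. Factor names follow the memo
# (age, length, traffic, crawl frequency); the formula is illustrative only.
from dataclasses import dataclass

@dataclass
class Article:
    age_years: float        # age of the story
    word_count: int         # length of the story
    monthly_visits: int     # traffic to the article
    crawl_gap_days: float   # days since Google last crawled the page

def deprecation_score(a: Article) -> float:
    """Higher score = stronger pruning candidate (illustrative heuristic)."""
    score = 0.0
    score += min(a.age_years / 10, 1.0)           # older articles score higher
    score += 1.0 if a.word_count < 300 else 0.0   # thin content scores higher
    score += 1.0 if a.monthly_visits < 50 else 0.0
    score += min(a.crawl_gap_days / 365, 1.0)     # rarely crawled scores higher
    return score

stale = Article(age_years=12, word_count=150, monthly_visits=3, crawl_gap_days=400)
fresh = Article(age_years=0.5, word_count=1200, monthly_visits=9000, crawl_gap_days=2)
print(deprecation_score(stale) > deprecation_score(fresh))  # stale ranks higher
```

Editorial factors like historical significance, which the memo says CNET also weighs, are exactly the part that can’t be reduced to a score like this.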

[…]

Google does not recommend deleting articles just because they’re considered “older,” said Danny Sullivan, the company’s Public Liaison for Google Search. In fact, the practice is something Google has advised against for years. After Gizmodo’s request for comment, Sullivan posted a series of tweets on the subject.

“Are you deleting content from your site because you somehow believe Google doesn’t like ‘old’ content? That’s not a thing! Our guidance doesn’t encourage this,” Sullivan tweeted.

[…]

However, SEO experts told Gizmodo that content pruning can be a useful strategy in some cases, though it’s an “advanced” practice that requires high levels of expertise, […]

Ideally outdated pages should be updated or redirected to a more relevant URL, and deleting content without a redirect should be a last resort. With fewer irrelevant pages on your site, the idea is that Google’s algorithms will be able to index and better focus on the articles or pages a publisher does want to promote.

Google may have an incentive to withhold details about its Search algorithm, both because it would rather be able to make its own decisions about how to rank websites, and because content pruning is a delicate process that can cause problems for publishers—and for Google—if it’s mishandled.

[…]

Whether or not deleting articles is an effective business strategy, it causes other problems that have nothing to do with search engines. For a publisher like CNET — one of the oldest tech news sites on the internet — removing articles means losing parts of the public record that could have unforeseen historical significance in the future.

[…]

Source: CNET Deletes Thousands of Old Articles to Game Google Search

That’s a big chunk of history gone there

Nearly every AMD CPU since 2017 vulnerable to Inception bug

AMD processor users, you have another data-leaking vulnerability to deal with: like Zenbleed, this latest hole can be exploited to steal sensitive data from a running vulnerable machine.

The flaw (CVE-2023-20569), dubbed Inception in reference to the Christopher Nolan flick about manipulating a person’s dreams to achieve a desired outcome in the real world, was disclosed by ETH Zurich academics this week.

And yes, it’s another speculative-execution-based side-channel that malware or a rogue logged-in user can abuse to obtain passwords, secrets, and other data that should be off limits.

Inception utilizes a previously disclosed vulnerability alongside a novel kind of transient execution attack, which the researchers refer to as training in transient execution (TTE), to leak information from an operating system kernel at a rate of 39 bytes per second on vulnerable hardware. In this case, vulnerable systems encompass pretty much AMD’s entire CPU lineup going back to 2017, including its latest Zen 4 Epyc and Ryzen processors.

Despite the potentially massive blast radius, AMD is downplaying the threat while simultaneously rolling out microcode updates for newer Zen chips to mitigate the risk. “AMD believes this vulnerability is only potentially exploitable locally, such as via downloaded malware,” the biz said in a public disclosure, which ranks Inception “medium” in severity.

Intel processors weren’t found to be vulnerable to Inception, but that doesn’t mean they’re entirely in the clear. Chipzilla is grappling with its own separate side-channel attack disclosed this week called Downfall.

How Inception works

As we understand it, successful exploitation of Inception takes advantage of the fact that in order for modern CPUs to achieve the performance they do, processor cores have to cut corners.

Rather than executing instructions strictly in order, the CPU core attempts to predict which ones will be needed and runs those out of sequence if it can, a technique called speculative execution. If the core guesses incorrectly, it discards or unwinds the computations it shouldn’t have done. That allows the core to continue getting work done without having to wait around for earlier operations to complete. Executing these instructions speculatively is also known as transient execution, and when this happens, a transient window is opened.
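The predict-then-unwind machinery can be sketched with a toy two-bit saturating branch predictor – the classic textbook structure, vastly simpler than what real cores use, but enough to show how repeated "training" sets up a confident misprediction of the kind these attacks rely on (a conceptual model, not an exploit):

```python
# Toy 2-bit saturating branch predictor: states 0-1 predict "not taken",
# states 2-3 predict "taken". Repeated outcomes push the state up or down.
class TwoBitPredictor:
    def __init__(self):
        self.state = 0  # start at strongly "not taken"

    def predict(self) -> bool:
        return self.state >= 2

    def update(self, taken: bool):
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

p = TwoBitPredictor()
for _ in range(10):        # "training": the branch is taken every time
    p.update(True)

prediction = p.predict()   # predictor now confidently says "taken"
actual = False             # victim run: the branch is actually not taken
mispredicted = prediction != actual
print(mispredicted)        # True: the core would speculate down the wrong path
```

In a real attack, the instructions executed down that wrong path are what open the transient window.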

Normally, this process renders substantial performance advantages, and refining this process is one of several ways CPU designers eke out instruction-per-clock gains generation after generation. However, as we’ve seen with previous side-channel attacks, like Meltdown and Spectre, speculative execution can be abused to make the core start leaking information it otherwise shouldn’t to observers on the same box.

Inception is a fresh twist on this attack vector, and involves two steps. The first takes advantage of a previously disclosed vulnerability called Phantom execution (CVE-2022-23825) which allows an unprivileged user to trigger a misprediction — basically making the core guess the path of execution incorrectly — to create a transient execution window on demand.

This window serves as a beachhead for a TTE attack. Instead of leaking information from the initial window, the TTE injects new mispredictions, which trigger more future transient windows. This, the researchers explain, causes an overflow in the return stack buffer with an attacker-controlled target.

“The result of this insight is Inception, an attack that leaks arbitrary data from an unprivileged process on all AMD Zen CPUs,” they wrote.

In a video published alongside the disclosure, and included below, the Swiss team demonstrate this attack by leaking the root account hash from /etc/shadow on a Zen 4-based Ryzen 7700X CPU with all Spectre mitigations enabled.

You can find a more thorough explanation of Inception, including the researchers’ methodology, in a paper here [PDF]. It was written by Daniël Trujillo, Johannes Wikner, and Kaveh Razavi of ETH Zurich. They’ve also shared proof-of-concept exploit code here.

Source: Nearly every AMD CPU since 2017 vulnerable to Inception bug • The Register

‘We’re changing the clouds.’ An unintended test of geoengineering is fueling record ocean warmth

[…]

researchers are now waking up to another factor, one that could be filed under the category of unintended consequences: disappearing clouds known as ship tracks. Regulations imposed in 2020 by the United Nations’ International Maritime Organization (IMO) have cut ships’ sulfur pollution by more than 80% and improved air quality worldwide. The reduction has also lessened the effect of sulfate particles in seeding and brightening the distinctive low-lying, reflective clouds that follow in the wake of ships and help cool the planet. The 2020 IMO rule “is a big natural experiment,” says Duncan Watson-Parris, an atmospheric physicist at the Scripps Institution of Oceanography. “We’re changing the clouds.”

With ship tracks dramatically reduced, the planet has warmed up faster, several new studies have found. That trend is magnified in the Atlantic, where maritime traffic is particularly dense. In the shipping corridors, the increased light represents a 50% boost to the warming effect of human carbon emissions. It’s as if the world suddenly lost the cooling effect from a fairly large volcanic eruption each year, says Michael Diamond, an atmospheric scientist at Florida State University.

The natural experiment created by the IMO rules is providing a rare opportunity for climate scientists to study a geoengineering scheme in action—although it is one that is working in the wrong direction. Indeed, one such strategy to slow global warming, called marine cloud brightening, would see ships inject salt particles back into the air, to make clouds more reflective. In Diamond’s view, the dramatic decline in ship tracks is clear evidence that humanity could cool off the planet significantly by brightening the clouds. “It suggests pretty strongly that if you wanted to do it on purpose, you could,” he says.

The influence of pollution on clouds remains one of the largest sources of uncertainty in how quickly the world will warm up, says Franziska Glassmeier, an atmospheric scientist at the Delft University of Technology. Progress on understanding these complex interactions has been slow. “Clouds are so variable,” Glassmeier says.

Some of the basic science is fairly well understood. Sulfate or salt particles seed clouds by creating nuclei for vapor to condense into droplets. The seeds also brighten existing clouds by creating smaller, more numerous droplets. The changes don’t stop there, says Robert Wood, an atmospheric scientist at the University of Washington. He notes that smaller droplets are less likely to merge with others, potentially suppressing rainfall. That would increase the size of clouds and add to their brightening effect. But modeling also suggests that bigger clouds are more likely to mix with dry air, which would reduce their reflectivity.
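The brightening effect of smaller, more numerous droplets (the Twomey effect) can be put in back-of-envelope form: at fixed cloud liquid water, optical depth scales roughly as the cube root of droplet number, and a common two-stream approximation relates albedo to optical depth. The numbers below are illustrative placeholders, not measured values:

```python
# Back-of-envelope Twomey effect: tau ~ N^(1/3) at fixed liquid water path,
# and cloud albedo ~ tau / (tau + 7.7) in a simple two-stream approximation.
# Reference values (tau_ref, n_ref) are illustrative, not observational.
def cloud_albedo(tau: float) -> float:
    return tau / (tau + 7.7)

def optical_depth(n_droplets: float, tau_ref: float = 10.0,
                  n_ref: float = 100.0) -> float:
    return tau_ref * (n_droplets / n_ref) ** (1.0 / 3.0)

clean = cloud_albedo(optical_depth(100))   # unpolluted marine cloud
seeded = cloud_albedo(optical_depth(300))  # sulfate-seeded: 3x more droplets
print(seeded > clean)                      # the seeded cloud is brighter
```

Tripling droplet number brightens the cloud by several percentage points of albedo in this toy model – small per cloud, but large when integrated over shipping corridors.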

[…]

Source: ‘We’re changing the clouds.’ An unintended test of geoengineering is fueling record ocean warmth | Science | AAAS

The Fear Of AI and Entitled Cancel Culture Just Killed A Very Useful Tool: Prosecraft

I do understand why so many people, especially creative folks, are worried about AI and how it’s used. The future is quite unknown, and things are changing very rapidly, at a pace that can feel out of control. However, when concern and worry about new technologies and how they may impact things morphs into mob-inspiring fear, dumb things happen. I would much rather that when we look at new things, we take a more realistic approach to them, and look at ways we can keep the good parts of what they provide, while looking for ways to mitigate the downsides.

Hopefully without everyone going crazy in the meantime. Unfortunately, that’s not really the world we live in.

Last year, when everyone was focused on generative AI for images, we had Rob Sheridan on the podcast to talk about why it was important for creative people to figure out how to embrace the technology rather than fear it. The opening story of the recent NY Times profile of me was all about me in a group chat, trying to suggest to some very creative Hollywood folks how to embrace AI rather than simply raging against it. And I’ve already called out how folks rushing to copyright, thinking that will somehow “save” them from AI, are barking up the wrong tree.

But, in the meantime, the fear over AI is leading to some crazy and sometimes unfortunate outcomes. Benji Smith, who created what appears to be an absolutely amazing tool for writers, Shaxpir, also created what looked like an absolutely fascinating tool called Prosecraft, that had scanned and analyzed a whole bunch of books and would let you call up really useful data on books.

He created it years ago, based on an idea he had years earlier, trying to understand the length of various books (which he initially kept in a spreadsheet). As Smith himself describes in a blog post:

I heard a story on NPR about how Kurt Vonnegut invented an idea about the “shapes of stories” by counting happy and sad words. The University of Vermont “Computational Story Lab” published research papers about how this technique could show the major plot points and the “emotional story arc” of the Harry Potter novels (as well as many many other books).

So I tried it myself and found that I could plot a graph of the emotional ups and downs of any story. I added those new “sentiment analysis” tools to the prosecraft website too.

When I ran out of books on my own shelves, I looked to the internet for more text that I could analyze, and I used web crawlers to find more books. I wanted to be mindful of the diversity of different stories, so I tried to find books by authors of every race and gender, from every different cultural and political background, writing in every different genre and exploring all different kinds of themes. Fiction and nonfiction and philosophy and science and religion and culture and politics.

Somewhere out there on the internet, I thought to myself, there was a new author writing a horror or romance or fantasy novel, struggling for guidance about how long to write their stories, how to write more vivid prose, and how much “passive voice” was too much or too little.

I wanted to give those budding storytellers a suite of “lexicographic” tools that they could use, to compare their own writing with the writing of authors they admire. I’ve been working in the field of computational linguistics and machine learning for 20+ years, and I was always frustrated that the fancy tools were only accessible to big businesses and government spy agencies. I wanted to bring that magic to everyone.

Frankly, all of that sounds amazing. And amazingly useful. Even more amazing is that he built it, and it worked, producing useful analyses of books such as Alice’s Adventures in Wonderland, along with further statistical breakdowns.

This is all quite interesting. It’s also the kind of thing that data scientists do in all kinds of work for useful purposes.
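The happy/sad word counting Smith describes can be sketched in a few lines: slide a window over the text and score each window with a sentiment lexicon. The lexicon and text here are toy stand-ins, not Prosecraft’s actual data or method:

```python
# Toy "emotional story arc": score sliding windows of text by counting
# words from tiny hand-made happy/sad lexicons (illustrative only).
HAPPY = {"joy", "love", "laugh", "bright", "hope"}
SAD = {"fear", "dark", "cry", "loss", "grief"}

def emotional_arc(words, window=5):
    """Return one sentiment score per window position."""
    arc = []
    for i in range(0, max(1, len(words) - window + 1)):
        chunk = words[i:i + window]
        arc.append(sum(w in HAPPY for w in chunk) - sum(w in SAD for w in chunk))
    return arc

text = "joy and love turn to fear and dark grief then hope and laugh return"
arc = emotional_arc(text.lower().split())
print(arc[0] > 0 and min(arc) < 0 and arc[-1] > 0)  # up, down, up again
```

Plotting `arc` over a whole novel gives the kind of "shape of the story" curve Vonnegut described.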

Smith built Prosecraft into Shaxpir, again, making it a more useful tool. But, on Monday, some authors on the internet found out about it and lost their shit, leading Smith to shut the whole project down.

There seems to be a lot of misunderstanding about all of this. Smith notes that he had researched the copyright issues and was sure he wasn’t violating anything, and he’s right. We’ve gone over this many times before. Scanning books is pretty clearly fair use. What you do with that later could violate copyright law, but I don’t see anything that Prosecraft did that comes anywhere even remotely close to violating copyright law.

But… some authors got pretty upset about all of it.

I’m still perplexed at what the complaint is here? You don’t need to “consent” for someone to analyze your book. You don’t need to “consent” to someone putting up statistics about their analysis of your book.

But, Zach’s tweet went viral with a bunch of folks ready to blow up anything that smacks of tech bro AI, and lots of authors started yelling at Smith.

The Gizmodo article has a ridiculously wrong “fair use” analysis, saying “Fair Use does not, by any stretch of the imagination, allow you to use an author’s entire copyrighted work without permission as a part of a data training program that feeds into your own ‘AI algorithm.’” Except… it almost certainly does? Again, we’ve gone through this with the Google Book scanning case, and the courts said that you can absolutely do that because it’s transformative.

It seems that what really tripped up people here was the “AI” part of it, and the fear that this was just another VC-funded “tech bro” exercise of building something to get rich by using the works of creatives. Except… none of that is accurate. As Smith explained in his blog post:

For what it’s worth, the prosecraft website has never generated any income. The Shaxpir desktop app is a labor of love, and during most of its lifetime, I’ve worked other jobs to pay the bills while trying to get the company off the ground and solve the technical challenges of scaling a startup with limited resources. We’ve never taken any VC money, and the whole company is a two-person operation just working our hardest to serve our small community of authors.

He also recognizes that the concerns about it being some “AI” thing are probably what upset people, but plenty of authors have found the tool super useful, and even added their own books:

I launched the prosecraft website in the summer of 2017, and I started showing it off to authors at writers conferences. The response was universally positive, and I incorporated the prosecraft analytic tools into the Shaxpir desktop application so that authors could privately run these analytics on their own works-in-progress (without ever sharing those analyses publicly, or even privately with us in our cloud).

I’ve spent thousands of hours working on this project, cleaning up and annotating text, organizing and tweaking things. A small handful of authors have even reached out to me, asking to have their books added to the website. I was grateful for their enthusiasm.

But in the meantime, “AI” became a thing.

And the arrival of AI on the scene has been tainted by early use-cases that allow anyone to create zero-effort impersonations of artists, cutting those creators out of their own creative process.

That’s not something I ever wanted to participate in.

Smith took the project down entirely because of that. He doesn’t want to get lumped in with other projects, and even though his project is almost certainly legal, he recognized that this was becoming an issue:

Today the community of authors has spoken out, and I’m listening. I care about you, and I hear your objections.

Your feelings are legitimate, and I hope you’ll accept my sincerest apologies. I care about stories. I care about publishing. I care about authors. I never meant to hurt anyone. I only hoped to make something that would be fun and useful and beautiful, for people like me out there struggling to tell their own stories.

I find all of this really unfortunate. Smith built something really cool, really amazing, that does not, in any way, infringe on anyone’s rights. I get the kneejerk reaction from some authors, who feared that this was some obnoxious project, but couldn’t they have taken 10 minutes to look at the details of what it was they were killing?

I know we live in an outrage era, where the immediate reaction is to turn the outrage meter up to 11. I’m certainly guilty of that at times myself. But this whole incident is just sad. It was an overreaction from the start, destroying what had been a clear labor of love and a useful project, through misleading and misguided attacks from authors.

Source: The Fear Of AI Just Killed A Very Useful Tool | Techdirt

Preservation Fail: Hasbro Wants Old ‘Transformers’ Games Re-Released, Except Activision Might Have Lost Them

And here we go again. We’ve been talking about how copyright has gotten in the way of cultural preservation generally for a while, and more specifically lately when it comes to the video game industry. The way this problem manifests itself is quite simple: video game publishers support the games they release for some period of time and then they stop. When they stop, depending on the type of game, it can make that game unavailable for legitimate purchase or use, either because the game is disappeared from retail and online stores, or because the servers needed to make them operational are taken offline. Meanwhile, copyright law prevents individuals and, in some cases, institutions from preserving and making those games available to the public, as a library or museum would.

When you make these preservation arguments, one of the common retorts you get from the gaming industry and its apologists is that publishers already preserve these games for eventual re-release down the road, which is why they need to maintain their copyright protection on that content. We’ve pointed out the industry’s failures to do so in the past, but the story of Hasbro wanting to re-release several older Transformers video games and being unable to is about as perfect an example as I can find.

Released in June 2010, Transformers: War for Cybertron was a well-received third-person shooter that got an equally great sequel in 2012, Fall of Cybertron. (And then in 2014 we got Rise of the Dark Spark, which wasn’t very good and was tied into the live-action films.) What made the first two games so memorable and beloved was that they told their own stories about the origins of popular characters like Megatron and Optimus Prime while featuring kick-ass combat that included the ability to transform into different vehicles. Sadly, in 2018, all of these Activision-published Transformers games (and several it commissioned from other developers) were yanked from digital stores, making them hard to acquire and play in 2023. It seems that Hasbro now wants that to change, suggesting the games could make a perfect fit for Xbox Game Pass, once Activision, uh…finds them.

You read that right: finds them. What does that mean? Well, when Hasbro came calling to Activision looking to see if this was a possibility, it devolved into Activision doing a theatrical production parody called Dude, Where’s My Hard Drive? It seems that these games may or may not exist on some piece of hardware, but Activision literally cannot find it. Or maybe not, as you’ll read below. There seems to be some confusion about what Activision can and cannot find.

And, yes, the mantra in the comments that pirate sites are essentially solving for this problem certainly applies here as well. So much so, in fact, that it sure sounds like Hasbro went that route to get what it needed for the toy design portion of this.

Interestingly, Activision’s lack of organization seems to have caused some headaches for Hasbro’s toy designers who are working on the Gamer Edition figures. The toy company explained that it had to load up the games on their original platforms and play through them to find specific details they wanted to recreate for the toys.

“For World of Cybertron we had to rip it ourselves, because [Activision] could not find it—they kept sending concept art instead, which we didn’t want,” explained Hasbro. “So we booted up an old computer and ripped them all out from there. Which was a learning experience and a long weekend, because we just wanted to get it right, so that’s why we did it like that.”

What’s strange is that despite the above, Activision responded to initial reports of all this indicating that the headlines were false and it does have… code. Or something.

Hasbro itself then followed up apologizing for the confusion, also saying that it made an error in stating the games were “lost”. But what’s strange about all that, in addition to the work that Hasbro did circumventing having access to the actual games themselves, is the time delta it took for Activision to respond to all of this.

Activision has yet to confirm if it actually knows where the source code for the games is specifically located. I also would love to know why Activision waited so long to comment (the initial interview was posted on July 28) and why Hasbro claimed to not have access to key assets when developing its toys based on the games.

It’s also strange that Hasbro, which says it wants to put these games on Game Pass, hasn’t done so for years now. If the games aren’t lost, give ‘em to Hasbro, then?

Indeed. If this was all a misunderstanding, so be it. But if it was, the rest of the circumstances surrounding this story don’t make a great deal of sense. At the very least, the possibility that these games could simply have been lost to the world is concerning, and it’s yet another data point for an industry that simply needs to do better when it comes to preservation efforts.

Source: Preservation Fail: Hasbro Wants Old ‘Transformers’ Games Re-Released, Except Activision Might Have Lost Them | Techdirt

Gravity changes how it works at low acceleration, shown by observations of widely separated binary stars

A new study reports conclusive evidence for the breakdown of standard gravity in the low acceleration limit from a verifiable analysis of the orbital motions of long-period, widely separated, binary stars, usually referred to as wide binaries in astronomy and astrophysics.

The study, carried out by Kyu-Hyun Chae, professor of physics and astronomy at Sejong University in Seoul, used up to 26,500 wide binaries within 650 light-years (LY) observed by the European Space Agency’s Gaia space telescope. The study was published in the 1 August 2023 issue of the Astrophysical Journal.

In a key improvement over other studies, Chae focused on calculating the gravitational accelerations experienced by the binary stars as a function of their separation or, equivalently, their orbital period, using a Monte Carlo deprojection of observed sky-projected motions into three-dimensional space.
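The deprojection step can be illustrated with a small Monte Carlo sketch. For isotropically oriented binaries, the angle i between the separation vector and the line of sight has cos(i) uniform on [0, 1], and the projected separation is s = r·sin(i), so on average s/r = π/4. This shows only the statistical idea, not Chae’s actual pipeline:

```python
# Monte Carlo sketch: for isotropic orientations, the average ratio of
# sky-projected to true 3D separation, <sin(i)>, converges to pi/4.
import math
import random

random.seed(0)

def mean_projection_factor(n_samples: int = 100_000) -> float:
    """Estimate <sin(i)> with cos(i) drawn uniformly on [0, 1]."""
    total = 0.0
    for _ in range(n_samples):
        cos_i = random.random()
        total += math.sqrt(1.0 - cos_i**2)   # sin(i) = projected/true ratio
    return total / n_samples

factor = mean_projection_factor()
print(abs(factor - math.pi / 4) < 0.01)      # converges to pi/4 ~ 0.785
```

Sampling orientations this way lets projected separations (and velocities) be converted, statistically, into three-dimensional ones.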

Chae explains, “From the start it seemed clear to me that [gravity] could be most directly and efficiently tested by calculating accelerations because [gravity] itself is an acceleration. My recent research experiences with galactic rotation curves led me to this idea. Galactic disks and wide binaries share some similarity in their orbits, though wide binaries follow highly elongated orbits while hydrogen gas particles in a galactic disk follow nearly circular orbits.”

Also, unlike other studies, Chae calibrated the occurrence rate of hidden nested inner binaries at a benchmark acceleration.

The study finds that when two stars orbit each other with accelerations lower than about one nanometer per second squared, their motions start to deviate from the predictions of Newton’s universal law of gravitation and Einstein’s general relativity.

For accelerations lower than about 0.1 nanometer per second squared, the observed acceleration is about 30 to 40% higher than the Newton-Einstein prediction. The significance is very high, meeting the conventional criterion of 5 sigma for a scientific discovery. In a sample of 20,000 wide binaries within a distance limit of 650 LY, two independent acceleration bins respectively show deviations of over 5 sigma significance in the same direction.

Because the observed accelerations stronger than about 10 nanometers per second squared agree well with the Newton-Einstein prediction in the same analysis, the observed boost of accelerations at lower accelerations is a mystery. What is intriguing is that this breakdown of the Newton-Einstein theory at accelerations weaker than about one nanometer per second squared was suggested 40 years ago by theoretical physicist Mordehai Milgrom at the Weizmann Institute in Israel, in a new theoretical framework called modified Newtonian dynamics (MOND), or Milgromian dynamics in current usage.

Moreover, the boost factor of about 1.4 is correctly predicted by a MOND-type Lagrangian theory of gravity called AQUAL, proposed by Milgrom and the late physicist Jacob Bekenstein. What is remarkable is that the correct boost factor requires the external field effect from the Milky Way galaxy that is a unique prediction of MOND-type modified gravity. Thus, what the wide binary data show are not only the breakdown of Newtonian dynamics but also the manifestation of the external field effect of modified gravity.
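How a boost over Newton emerges below Milgrom's acceleration scale can be sketched with the widely used "simple" MOND interpolating function. Note this isolated-system formula overshoots the ~1.4 boost quoted above, because the AQUAL external field effect from the Milky Way tempers the low-acceleration limit; it is shown only to illustrate the shape of the effect:

```python
# Simple MOND interpolating function: g_obs = nu(y) * g_N, with
# y = g_N / a0 and nu(y) = 0.5 * (1 + sqrt(1 + 4/y)).
# a0 is Milgrom's acceleration constant, ~1.2e-10 m/s^2.
import math

A0 = 1.2e-10  # m/s^2

def mond_boost(g_newton: float) -> float:
    """Ratio of predicted MOND acceleration to Newtonian (simple nu)."""
    y = g_newton / A0
    return 0.5 * (1.0 + math.sqrt(1.0 + 4.0 / y))

high = mond_boost(1e-8)   # 10 nm/s^2: essentially Newtonian, boost ~ 1
low = mond_boost(1e-10)   # 0.1 nm/s^2: a clear boost over Newton
print(high < 1.1 < low)   # the boost only appears below ~a0
```

The article's reported pattern has the same shape: agreement with Newton-Einstein above ~10 nm/s², and a growing excess below ~1 nm/s².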

On the results, Chae says, “It seems impossible that a conspiracy or unknown systematic can cause this acceleration-dependent breakdown of the standard gravity in agreement with AQUAL. I have examined all possible systematics as described in the rather long paper. The results are genuine. I foresee that the results will be confirmed and refined with better and larger data in the future. I have also released all my codes for the sake of transparency and to serve any interested researchers.”

Unlike galactic rotation curves in which the observed boosted accelerations can, in principle, be attributed to dark matter in the Newton-Einstein standard gravity, wide binary dynamics cannot be affected by it even if it existed. The standard gravity simply breaks down in the weak acceleration limit in accordance with the MOND framework.

Implications of wide binary dynamics are profound in astrophysics, theoretical physics, and cosmology. Anomalies in Mercury’s orbit observed in the nineteenth century eventually led to Einstein’s general relativity.

Now anomalies in wide binaries require a new theory extending general relativity to the low acceleration MOND limit. Despite all the successes of Newton’s gravity, general relativity is needed for relativistic gravitational phenomena such as black holes and gravitational waves. Likewise, despite all the successes of general relativity, a new theory is needed for MOND phenomena in the weak acceleration limit. The weak-acceleration catastrophe of gravity may have some similarity to the ultraviolet catastrophe of classical electrodynamics that led to quantum physics.

Wide binary anomalies are a disaster for the standard gravity and cosmology that rely on dark matter and dark energy concepts. Because gravity follows MOND, a large amount of dark matter in galaxies (and even in the universe) is no longer needed. This is also a big surprise to Chae who, like most scientists, “believed in” dark matter until a few years ago.

A new revolution in physics seems now under way. Milgrom says, “Chae’s finding is a result of a very involved analysis of cutting-edge data, which, as far as I can judge, he has performed very meticulously and carefully. But for such a far-reaching finding—and it is indeed very far reaching—we require confirmation by independent analyses, preferably with better future data.”

“If this anomaly is confirmed as a breakdown of Newtonian dynamics, and especially if it indeed agrees with the most straightforward predictions of MOND, it will have enormous implications for astrophysics, cosmology, and for fundamental physics at large.”

Xavier Hernandez, professor at UNAM in Mexico who first suggested wide binary tests of gravity a decade ago, says, “It is exciting that the departure from Newtonian gravity that my group has claimed for some time has now been independently confirmed, and impressive that this departure has for the first time been correctly identified as accurately corresponding to a detailed MOND model. The unprecedented accuracy of the Gaia satellite, the large and meticulously selected sample Chae uses and his detailed analysis, make his results sufficiently robust to qualify as a discovery.”

Pavel Kroupa, professor at Bonn University and at Charles University in Prague, has come to the same conclusions concerning the law of gravitation. He says, “With this test on wide binaries as well as our tests on open star clusters nearby the sun, the data now compellingly imply that gravitation is Milgromian rather than Newtonian. The implications for all of astrophysics are immense.”

More information: Kyu-Hyun Chae, Breakdown of the Newton–Einstein Standard Gravity at Low Acceleration in Internal Dynamics of Wide Binary Stars, The Astrophysical Journal (2023). DOI: 10.3847/1538-4357/ace101

Source: Smoking-gun evidence for modified gravity at low acceleration from Gaia observations of wide binary stars

China floats rules for facial recognition technology – they are good, and would be great if the govt was bound by them too!

China has released draft regulations to govern the country’s facial recognition technology that include prohibitions on its use to analyze race or ethnicity.

According to the Cyberspace Administration of China (CAC), the purpose is to “regulate the application of face recognition technology, protect the rights and interests of personal information and other personal and property rights, and maintain social order and public safety” as outlined by a smattering of data security, personal information, and network laws.

The draft rules, which are open for comments until September 7, include some vague directives not to use face recognition technology to disrupt social order, endanger national security, or infringe on the rights of individuals and organizations.

The rules also state that facial recognition tech must be used only when there is a specific purpose and sufficient necessity, strict protection measures are taken, and only when non-biometric measures won’t do.

The rules require consent to be obtained before processing face information, except for cases where it’s not required – which The Reg assumes means for individuals such as prisoners and in instances of national security. Parental or guardian consent is needed for those under the age of 14.

Building managers can’t require its use to enter and exit property – they must provide alternative means of verifying personal identity for those who want them.

Facial recognition also can’t be relied on for “major personal interests” such as social assistance and real estate disposal. For those, manual verification of personal identity must be used, with facial recognition serving only as an auxiliary means of verification.

And collecting images for internal management should only be done in a reasonably sized area.

In businesses like hotels, banks, airports, art galleries, and more, the tech should not be used to verify personal identity. If the individual chooses to link their identity to the image, they should be informed either verbally or in writing and provide consent.

Collecting images is also not allowed in private spaces like hotel rooms, public bathrooms, and changing rooms.

Furthermore, those using facial surveillance techniques must display reminder signs. Personal images and identification information must be kept confidential, and only anonymized data may be saved.

Under the draft regs, those that store face information of more than 10,000 people must register with a local branch of the CAC within 30 working days.

Most interesting, however, is Article 11, which, when translated from Chinese via automated tools, reads:

No organization or individual shall use face recognition technology to analyze personal race, ethnicity, religion, sensitive personal information such as beliefs, health status, social class, etc.

The CAC does not say if the Chinese Communist Party counts as an “organization.”

Human rights groups have credibly asserted that Uyghurs are routinely surveilled using facial recognition technology, in addition to being incarcerated, required to perform forced labor, re-educated to abandon their beliefs and cultural practices, and may even be subjected to sterilization campaigns.

Just last month, physical security monitoring org IPVM reported it came into possession of a contract between China-based Hikvision and Hainan Province’s Chengmai County for $6 million worth of cameras that could detect whether a person was ethnically Uyghur using minority recognition technology.

Hikvision denied the report and said it last provided such functionality in 2018.

Beyond facilitating identification of Uyghurs, it’s clear the cat is out of the bag when it comes to facial recognition technology in China, used by government and businesses alike. Local police use it to track down criminals and its use feeds into China’s social credit system.

“‘Sky Net,’ a facial recognition system that can scan China’s population of about 1.4 billion people in a second, is being used in 16 Chinese cities and provinces to help police crack down on criminals and improve security,” said state-sponsored media in 2018.

Regardless, the CAC said that once the draft rules pass, violators would be held to criminal and civil liability.

Source: China floats rules for facial recognition technology • The Register

Nuclear Fusion Scientists Successfully Recreate Net Energy Gain

[…] Reuters reports that scientists with the Lawrence Livermore National Laboratory’s National Ignition Facility in California repeated a fusion ignition reaction. The lab’s first breakthrough was announced by the U.S. Department of Energy in December. While the previous experiment produced net energy gain, a spokesperson from the lab told the outlet that this second experiment, conducted on July 30, produced an even higher energy yield. While the laboratory called the experiment a success, results from the test are still being analyzed.

[…]

While fusion reactions are a staple in physics, experiments had long required more energy in than they produced, making the net energy gain in both reactions a noteworthy result. The Department of Energy revealed in its December announcement that the fusion test conducted by the laboratory at that time required 2 megajoules of energy while producing 3 megajoules. The previous fusion experiment at the National Ignition Facility used 192 lasers focused on a peppercorn-sized target. Those lasers create temperatures as high as 100 million degrees Fahrenheit and pressures of over 100 billion Earth atmospheres in order to induce a fusion reaction in the target.
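Those December figures translate into a target gain just above breakeven; a quick check using only the numbers quoted above:

```python
# Target gain from the December shot: energy out divided by laser energy in.
e_in_mj = 2.0   # laser energy delivered to the target, megajoules
e_out_mj = 3.0  # fusion energy released, megajoules

gain = e_out_mj / e_in_mj
print(f"target gain Q = {gain:.1f}")  # Q > 1 means net energy gain
```

Worth noting: the 2 MJ counts only laser energy delivered to the target; the facility’s lasers draw far more energy from the grid, so Q > 1 here is gain at the target, not plant-level gain.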

[…]


Source: Nuclear Fusion Scientists Successfully Recreate Net Energy Gain

AI listens to keyboards on video conferences – decodes passwords

[…] a new paper from the UK that shows how researchers trained an AI to decode keystrokes from noise on conference calls.

The researchers point out that people don’t expect sound-based exploits. The paper reads, “For example, when typing a password, people will regularly hide their screen but will do little to obfuscate their keyboard’s sound.”

The technique uses the same kind of attention network that makes models like ChatGPT so powerful. It seems to work well: the paper claims a 97% peak accuracy over both telephone and Zoom audio. In addition, where the model was wrong, it tended to be close, identifying an adjacent keystroke instead of the correct one. This would be easy to correct for in software, or even in your head, infrequent as it is. If you see the sentence “Paris im the s[ring,” you can probably figure out what was really typed.
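A minimal sketch of that kind of software correction, assuming a small dictionary and a hand-built (and deliberately partial) QWERTY adjacency map — both hypothetical, not taken from the paper:

```python
# Illustrative sketch: correct a decoded string by checking each unrecognized
# word against a dictionary, allowing each character to be swapped for a
# physically adjacent QWERTY key. The adjacency map below covers only the
# keys needed for this demo.
ADJACENT = {
    "i": "uojk", "m": "njk,", "[": "p;'",
    "p": "o[l;", "n": "bhjm", "s": "awedxz",
}

DICTIONARY = {"paris", "in", "the", "spring"}

def candidates(word):
    """Yield variants of `word` with one key replaced by an adjacent key."""
    for i, ch in enumerate(word):
        for alt in ADJACENT.get(ch, ""):
            yield word[:i] + alt + word[i + 1:]

def correct(text):
    out = []
    for word in text.lower().split():
        if word in DICTIONARY:
            out.append(word)
            continue
        # Fall back to the original word if no one-key variant is in the dictionary.
        fixed = next((c for c in candidates(word) if c in DICTIONARY), word)
        out.append(fixed)
    return " ".join(out)

print(correct("Paris im the s[ring"))  # -> "paris in the spring"
```

A real implementation would use a full keyboard layout, a large dictionary, and a language model to rank candidates, but the adjacency trick is the core idea.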

[…]

Source: Noisy Keyboards Sink Ships | Hackaday

North Korean hackers put backdoors in Russian hypersonic missile maker computers

Reuters found that cyber-espionage teams linked to the North Korean government – known to security researchers as ScarCruft and Lazarus – secretly installed stealthy digital backdoors into systems at NPO Mashinostroyeniya, a rocket design bureau based in Reutov, a small town on the outskirts of Moscow.

Reuters could not determine whether any data was taken during the intrusion or what information may have been viewed.

[…]

Source: North Korean hackers stole secrets of Russian hypersonic missile maker – EURACTIV.com

Scientists observe first evidence of ‘quantum superchemistry’ in the laboratory

A team from the University of Chicago has announced the first evidence for “quantum superchemistry”—a phenomenon where particles in the same quantum state undergo collective accelerated reactions. The effect had been predicted, but never observed in the laboratory.

[…]

Chin’s group is experienced with herding atoms into quantum states, but molecules are larger and much more complex than atoms—so the group had to invent new techniques to wrangle them.

In the experiments, the scientists cooled down cesium atoms and coaxed them into the same quantum state. Next, they watched as the atoms reacted to form molecules.

In ordinary chemistry, the individual atoms would collide, and there’s a probability for each collision to form a molecule. However, quantum mechanics predicts that atoms in a quantum state perform actions collectively instead.

[…]

One consequence is that the reaction happens faster than it would under ordinary conditions. In fact, the more atoms in the system, the faster the reaction happens.

Another consequence is that the final molecules share the same molecular state. Chin explained that the same molecules in different states can have different physical and chemical properties—but there are times when you want to create a batch of molecules in a specific state. In traditional chemistry, you’re rolling the dice. “But with this technique, you can steer the molecules into an identical state,” he said.

[…]

More information: Zhendong Zhang et al, Many-body chemical reactions in a quantum degenerate gas, Nature Physics (2023). DOI: 10.1038/s41567-023-02139-8

Source: Scientists observe first evidence of ‘quantum superchemistry’ in the laboratory

What? AI-Generated Art Banned from Future Dungeons & Dragons Books After “Fan Uproar” (Or ~1600 tweets about it)

A Dungeons & Dragons expansion book included AI-generated artwork. Fans on Twitter spotted it before the book was even released (noting, among other things, a wolf with human feet). An embarrassed representative for Wizards of the Coast then tweeted out an announcement about new guidelines stating explicitly that “artists must refrain from using AI art generation as part of their creation process for developing D&D art.” GeekWire reports: The artist in question, Ilya Shkipin, is a California-based painter, illustrator, and operator of an NFT marketplace, who has worked on projects for Renton, Wash.-based Wizards of the Coast since 2014. Shkipin took to Twitter himself on Friday, and acknowledged in several now-deleted tweets that he’d used AI tools to “polish” several original illustrations and concept sketches. As of Saturday morning, Shkipin had taken down his original tweets and announced that the illustrations for Glory of the Giants are “going to be reworked…”

While the physical book won’t be out until August 15, the e-book is available now from Wizards’ D&D Beyond digital storefront.
Wizards of the Coast emphasized this won’t happen again. About this particular incident, they noted “We have worked with this artist since 2014 and he’s put years of work into books we all love. While we weren’t aware of the artist’s choice to use AI in the creation process for these commissioned pieces, we have discussed with him, and he will not use AI for Wizards’ work moving forward.”

GeekWire adds that the latest D&D video game, Baldur’s Gate 3, “went into its full launch period on Tuesday. Based on metrics such as its player population on Steam, BG3 has been an immediate success, with a high of over 709,000 people playing it concurrently on Saturday afternoon.”

Source: AI-Generated Art Banned from Future ‘Dungeons & Dragons’ Books After Fan Uproar – Slashdot

Really? 1600 tweets about this is considered an “uproar” and was enough to change policy into anti-AI? If you actually look at the pictures, only the wolf with human feet looked strange; the rest of the complaints didn’t hold up in my eyes. Welcome to life – we have AIs now and people are going to use them. They are going to save artists loads of time and allow them to create really, really cool stuff… like these pictures!

Come on Wizards of the Coast, don’t be luddites.

MIT Boffins Build Battery Alternative Out of Cement, Carbon Black, and Water

Long-time Slashdot reader KindMind shares a report from The Register: Researchers at MIT claim to have found a novel way to store energy using nothing but cement, a bit of water, and powdered carbon black — a form of elemental carbon. The materials can be cleverly combined to create supercapacitors, which could in turn be used to build power-storing foundations of houses, roadways that could wirelessly charge vehicles, and foundations for wind turbines and other renewable energy systems — all while holding a surprising amount of energy, the team claims. According to a paper published in the Proceedings of the National Academy of Sciences, 45 cubic meters of the carbon-black-doped cement could have enough capacity to store 10 kilowatt-hours of energy — roughly the amount an average household uses in a day. A block of cement that size would measure about 3.5 meters per side and, depending on the size of the house, the block could theoretically store all the energy an off-grid home using renewables would need. […]

Just three percent of the mixture has to be carbon black for the hardened cement to act as a supercapacitor, but the researchers found that a 10 percent carbon black mixture appears to be ideal. Beyond that ratio, the cement becomes less stable — not something you want in a building or foundation. The team notes that non-structural use could allow higher concentrations of carbon black, and thus higher energy storage capacity. The team has only built a tiny one-volt test platform using its carbon black mix, but has plans to scale up to supercapacitors the same size as a 12-volt automobile battery — and eventually to the 45 cubic meter block. Along with being used for energy storage, the mix could also be used to provide heat — by applying electricity to the conductive carbon network encased in the cement, MIT noted.
As Science magazine puts it, “Tesla’s Powerwall, a boxy, wall-mounted, lithium-ion battery, can power your home for half a day or so. But what if your home was the battery?”
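The block-size figure checks out with a couple of lines of arithmetic on the numbers quoted above:

```python
# Sanity check of the cement supercapacitor claim: a 45 m^3 block storing 10 kWh.
volume_m3 = 45.0
energy_kwh = 10.0

side_m = volume_m3 ** (1 / 3)                    # edge length of a cubic block
density_wh_per_m3 = energy_kwh * 1000 / volume_m3

print(f"cube side ~ {side_m:.2f} m")             # ~3.56 m, matching "about 3.5 meters"
print(f"density ~ {density_wh_per_m3:.0f} Wh/m^3")  # ~222 Wh per cubic meter
```

That energy density is tiny compared with lithium-ion cells, which is exactly why the material only makes sense in bulk structures that would be poured anyway.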

Source: MIT Boffins Build Battery Alternative Out of Cement, Carbon Black, and Water – Slashdot

Will Browsers Be Required By Law To Stop You From Visiting Infringing Sites?

Mozilla’s Open Policy & Advocacy blog has news about a worrying proposal from the French government:

In a well-intentioned yet dangerous move to fight online fraud, France is on the verge of forcing browsers to create a dystopian technical capability. Article 6 (para II and III) of the SREN Bill would force browser providers to create the means to mandatorily block websites present on a government provided list.

The post explains why this is an extremely dangerous approach:

A world in which browsers can be forced to incorporate a list of banned websites at the software-level that simply do not open, either in a region or globally, is a worrying prospect that raises serious concerns around freedom of expression. If it successfully passes into law, the precedent this would set would make it much harder for browsers to reject such requests from other governments.

If a capability to block any site on a government blacklist were required by law to be built in to all browsers, then repressive governments would be given an enormously powerful tool. There would be no way around that censorship, short of hacking the browser code. That might be an option for open source coders, but it certainly won’t be for the vast majority of ordinary users. As the Mozilla post points out:

Such a move will overturn decades of established content moderation norms and provide a playbook for authoritarian governments that will easily negate the existence of censorship circumvention tools.

It is even worse than that. If such a capability to block any site were built in to browsers, it’s not just authoritarian governments that would be rubbing their hands with glee: the copyright industry would doubtless push for allegedly infringing sites to be included on the block list too. We know this, because it has already done it in the past, as discussed in Walled Culture the book (free digital versions).

Not many people now remember, but in 2004, BT (British Telecom) caused something of a storm when it created CleanFeed:

British Telecom has taken the unprecedented step of blocking all illegal child pornography websites in a crackdown on abuse online. The decision by Britain’s largest high-speed internet provider will lead to the first mass censorship of the web attempted in a Western democracy.

Here’s how it worked:

Subscribers to British Telecom’s internet services such as BTYahoo and BTInternet who attempt to access illegal sites will receive an error message as if the page was unavailable. BT will register the number of attempts but will not be able to record details of those accessing the sites.

The key justification for what the Guardian called “the first mass censorship of the web attempted in a Western democracy” was that it only blocked illegal child sexual abuse material Web sites. It was therefore an extreme situation requiring an exceptional solution. But seven years later, the copyright industry was able to convince a High Court judge to ignore that justification, and to take advantage of CleanFeed to block a site, Newzbin 2, that had nothing to do with child sexual abuse material, and therefore did not require exceptional solutions:

Justice Arnold ruled that BT must use its blocking technology CleanFeed – which is currently used to prevent access to websites featuring child sexual abuse – to block Newzbin 2.

Exactly the logic used by copyright companies to subvert CleanFeed could be used to co-opt the censorship capabilities of browsers with built-in Web blocking lists. As with CleanFeed, the copyright industry would doubtless argue that since the technology already exists, why not apply it to tackling copyright infringement too?

That very real threat is another reason to fight this pernicious, misguided French proposal. Because if it is implemented, it will be very hard to stop it becoming yet another technology that the copyright world demands should be bent to its own selfish purposes.

Source: Will Browsers Be Required By Law To Stop You From Visiting Infringing Sites? | Techdirt

Very scary indeed

Academic Book About Emojis Can’t Include The Emojis It Talks About Because Of Copyright

Jieun Kiaer, an Oxford professor of Korean linguistics, recently published an academic book called Emoji Speak: Communications and Behaviours on Social Media. As you can tell from the name, it’s a book about emoji, and about how people communicate with them:

Exploring why and how emojis are born, and the different ways in which people use them, this book highlights the diversity of emoji speak. Presenting the results of empirical investigations with participants of British, Belgian, Chinese, French, Japanese, Jordanian, Korean, Singaporean, and Spanish backgrounds, it raises important questions around the complexity of emoji use.

Though emojis have become ubiquitous, their interpretation can be more challenging. What is humorous in one region, for example, might be considered inappropriate or insulting in another. Whilst emoji use can speed up our communication, we might also question whether they convey our emotions sufficiently. Moreover, far from belonging to the youth, people of all ages now use emoji speak, prompting Kiaer to consider the future of our communication in an increasingly digital world.

Sounds interesting enough, but as law professor Eric Goldman highlights with an image from the book, Kiaer was apparently unable to actually show examples of many of the emoji she was discussing due to copyright fears. While companies like Twitter and Google have offered up their own emoji sets under open licenses, not all of them have, and some of the specifics about how different companies represent different emoji apparently were key to the book.

So, for those, Kiaer actually hired an artist, Loli Kim, to draw similar emoji!


The page reads as follows (with paragraph breaks added for readability):

Notes on Images of Emojis

Social media spaces are almost entirely copyright free. They do not follow the same rules as the offline world. For example, on Twitter you can retweet any tweet and add your own opinion. On Instagram, you can share any post and add stickers or text. On TikTok, you can even ‘duet’ a video to add your own video next to a pre-existing one. As much as each platform has its own rules and regulations, people are able to use and change existing material as they wish. Thinking about copyright brings to light barriers that exist between the online and offline worlds. You can use any emoji in your texts, tweets, posts and videos, but if you want to use them in the offline world, you may encounter a plethora of copyright issues.

In writing this book, I have learnt that online and offline exist upon two very different foundations. I originally planned to have plenty of images of emojis, stickers, and other multi-modal resources featured throughout this book, but I have been unable to for copyright reasons. In this moment, I realized how difficult it is to move emojis from the online world into the offline world.

Even though I am writing this book about emojis and their significance in our lives, I cannot use images of them in even an academic book. Were I writing a tweet or Instagram post, however, I would likely have no problem. Throughout this book, I stress that emoji speak in online spaces is a grassroots movement in which there are no linguistic authorities and corporations have little power to influence which emojis we use. Comparatively, in offline spaces, big corporations take ownership of our emoji speak, much like linguistic authorities dictate how we should write and speak properly.

This sounds like something out of a science fiction story, but it is an important fact of which to be aware. While the boundaries between our online and offline words may be blurring, barriers do still exist between them. For this reason, I have had to use an artist’s interpretation of the images that I originally had in mind for this book. Links to the original images have been provided as endnotes, in case readers would like to see them.

Just… incredible. Now, my first reaction to this is that using the emoji and stickers and whatnot in the book seems like a very clear fair use situation. But… that requires a publisher willing to take up the fight (and an insurance company behind the publisher willing to finance that fight). And, that often doesn’t happen. Publishers are notoriously averse to supporting fair use, because they don’t want to get sued.

But, really, this just ends up highlighting (once again) the absolute ridiculousness of copyright in the modern world. No one in their right mind would think that a book about emoji is somehow harming the market for whatever emoji or stickers the professor wished to include. Yet, due to the nature of copyright, here we are. With an academic book about emoji that can’t even include the emoji being spoken about.

Source: Academic Book About Emojis Can’t Include The Emojis It Talks About Because Of Copyright | Techdirt

AI-assisted mammogram cancer screening could cut radiologist workloads in half

A newly published study in the Lancet Oncology journal has found that the use of AI in mammogram cancer screening can safely cut radiologist workloads nearly in half without risk of increasing false-positive results. In effect, the study found that the AI’s recommendations were on par with those of two radiologists working together.

“AI-supported mammography screening resulted in a similar cancer detection rate compared with standard double reading, with a substantially lower screen-reading workload, indicating that the use of AI in mammography screening is safe,” the study found.

The study was performed by a research team out of Lund University in Sweden and, accordingly, followed 80,033 Swedish women (average age of 54) for just over a year in 2021-2022. Of the 39,996 patients that were randomly assigned AI-empowered breast cancer screenings, 0.61 percent, or 244 tests, returned screen-detected cancers. Of the other 40,024 patients that received conventional cancer screenings, just 0.51 percent, or 203 tests, returned screen-detected cancers.

Of those extra 41 cancers detected by the AI side, 19 turned out to be invasive. Both the AI-empowered and conventional screenings ran a 1.5 percent false positive rate. Most impressively, radiologists on the AI side had to look at 36,886 fewer screen readings than their counterparts, a 44 percent reduction in their workload.
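The detection rates follow directly from the counts quoted above:

```python
# Recomputing the per-arm cancer detection rates from the quoted counts.
ai_cancers, ai_patients = 244, 39_996
std_cancers, std_patients = 203, 40_024

ai_rate = ai_cancers / ai_patients * 1000    # cancers per 1,000 women screened
std_rate = std_cancers / std_patients * 1000

print(f"AI arm:       {ai_rate:.1f} per 1,000")   # ~6.1
print(f"standard arm: {std_rate:.1f} per 1,000")  # ~5.1
print(f"extra cancers detected: {ai_cancers - std_cancers}")  # 41
```

In other words, a detection rate of roughly six per thousand in the AI arm versus five per thousand under standard double reading, with an identical false-positive rate.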

[…]

Source: AI-assisted cancer screening could cut radiologist workloads in half | Engadget

Microsoft Comes Under Blistering Criticism For ‘Grossly Irresponsible’ Azure Security

An anonymous reader quotes a report from Ars Technica: Microsoft has once again come under blistering criticism for the security practices of Azure and its other cloud offerings, with the CEO of security firm Tenable saying Microsoft is “grossly irresponsible” and mired in a “culture of toxic obfuscation.” The comments from Amit Yoran, chairman and CEO of Tenable, come six days after Sen. Ron Wyden (D-Ore.) blasted Microsoft for what he said were “negligent cybersecurity practices” that enabled hackers backed by the Chinese government to steal hundreds of thousands of emails from cloud customers, including officials in the US Departments of State and Commerce. Microsoft has yet to provide key details about the mysterious breach, which involved the hackers obtaining an extraordinarily powerful encryption key granting access to a variety of its other cloud services. The company has taken pains ever since to obscure its infrastructure’s role in the mass breach.

On Wednesday, Yoran took to LinkedIn to castigate Microsoft for failing to fix what the company said on Monday was a “critical” issue that gives hackers unauthorized access to data and apps managed by Azure AD, a Microsoft cloud offering for managing user authentication inside large organizations. Monday’s disclosure said that the firm notified Microsoft of the problem in March and that Microsoft reported 16 weeks later that it had been fixed. Tenable researchers told Microsoft that the fix was incomplete. Microsoft set the date for providing a complete fix to September 28.

“To give you an idea of how bad this is, our team very quickly discovered authentication secrets to a bank,” Yoran wrote. “They were so concerned about the seriousness and the ethics of the issue that we immediately notified Microsoft.” He continued: “Did Microsoft quickly fix the issue that could effectively lead to the breach of multiple customers’ networks and services? Of course not. They took more than 90 days to implement a partial fix — and only for new applications loaded in the service.” In response, Microsoft officials wrote: “We appreciate the collaboration with the security community to responsibly disclose product issues. We follow an extensive process involving a thorough investigation, update development for all versions of affected products, and compatibility testing among other operating systems and applications. Ultimately, developing a security update is a delicate balance between timeliness and quality, while ensuring maximized customer protection with minimized customer disruption.” Microsoft went on to say that the initial fix in June “mitigated the issue for the majority of customers” and “no customer action is required.”

In a separate email, Yoran responded: “It now appears that it’s either fixed, or we are blocked from testing. We don’t know the fix, or mitigation, so hard to say if it’s truly fixed, or Microsoft put a control in place like a firewall rule or ACL to block us. When we find vulns in other products, vendors usually inform us of the fix so we can validate it effectively. With Microsoft Azure that doesn’t happen, so it’s a black box, which is also part of the problem. The ‘just trust us’ lacks credibility when you have the current track record.”

Source: Microsoft Comes Under Blistering Criticism For ‘Grossly Irresponsible’ Security – Slashdot

A great example of why a) closed source software is a really bad idea, b) why responsible disclosure is a good idea and c) why cloud is often a bad idea

IBM and NASA open source satellite-image-labeling AI model

IBM and NASA have put together and released Prithvi: an open source foundation AI model that may help scientists and other folks analyze satellite imagery.

The vision transformer model, released under an Apache 2 license, is relatively small at 100 million parameters, and was trained on a year’s worth of images collected by the US space boffins’ Harmonized Landsat Sentinel-2 (HLS) program. As well as the main model, three variants of Prithvi are available, fine-tuned for identifying flooding; wildfire burn scars; and crops and other land use.

Essentially, it works like this: you feed one of the models an overhead satellite photo, and it labels areas in the snap it understands. For example, the variant fine-tuned for crops can point out where there’s probably water, forests, corn fields, cotton fields, developed land, wetlands, and so on.

This collection, we imagine, would be useful for, say, automating the study of changes to land over time – such as tracking erosion from flooding, or how drought and wildfires have hit a region. Big Blue and NASA aren’t the first to do this with machine learning: there are plenty of previous efforts we could cite.

A demo of the crop-classifying Prithvi model can be found here. Provide your own satellite imagery or use one of the examples at the bottom of the page. Click Submit to run the model live.

“We believe that foundation models have the potential to change the way observational data is analyzed and help us to better understand our planet,” Kevin Murphy, chief science data officer at NASA, said in a statement. “And by open sourcing such models and making them available to the world, we hope to multiply their impact.”

Developers can download the models from Hugging Face here.

There are other online demos of Prithvi, such as this one for the variant fine-tuned for bodies of water; this one for detecting wildfire scars; and this one that shows off the model’s ability to reconstruct partially photographed areas.

[…]

Source: IBM and NASA open source satellite-image-labeling AI model • The Register

Couple admit laundering $4B of stolen Bitfinex Bitcoins

Ilya Lichtenstein and Heather Morgan on Thursday pleaded guilty to money-laundering charges related to the 2016 theft of some 120,000 Bitcoins from Hong Kong-based Bitfinex.

The Feds arrested Lichtenstein, 35, and Morgan, 33, in February 2022 following the US government’s tracing of about 95,000 of the stolen BTC – worth about $3.6 billion at the time and $2.8 billion today – to digital wallets controlled by the married couple.
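The per-coin prices implied by those round figures work out as follows — a back-of-the-envelope check on the article's numbers, not figures from the court filings:

```python
# The article's round figures: ~95,000 of the stolen BTC were traced,
# worth ~$3.6 billion at the time of arrest and ~$2.8 billion "today"
# (i.e., at the time the article was written, mid-2023).
SEIZED_BTC = 95_000

price_at_arrest = 3.6e9 / SEIZED_BTC
price_mid_2023 = 2.8e9 / SEIZED_BTC

print(f"~${price_at_arrest:,.0f} per BTC at arrest (Feb 2022)")  # ≈ $37,895
print(f"~${price_mid_2023:,.0f} per BTC mid-2023")               # ≈ $29,474
```

Both implied prices are consistent with where Bitcoin actually traded in early 2022 and mid-2023, so the article's valuations check out.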

The Justice Department at the time described the seizure as the largest ever and has since recovered an additional $475 million.

[…]

Lichtenstein admitted in court that he gained access to Bitfinex’s network using unidentified tools and techniques. According to prosecutors, once inside, Lichtenstein initiated more than 2,000 fraudulent transactions that sent 119,754 bitcoin from Bitfinex to a cryptocurrency wallet he controlled.

Thereafter, the Justice Department said, he tried to cover his tracks by deleting access credentials and log files, and then involved Morgan to help launder the stolen funds by transferring them through a maze of financial accounts. At one point Lichtenstein used some of the funds to buy gold coins, which were then buried by Morgan.

An affidavit [PDF] from IRS investigator Christopher Janczewski, which documents the basis of the US government’s case, traces the flow of stolen funds through multiple accounts associated with the defendants.

[…]

Source: Couple admit laundering $4B of stolen Bitfinex Bitcoins • The Register

Special License For Supercars Will Be Required In South Australia by 2024

The state of South Australia, home to 1.8 million people, is treading that well-worn path with new laws regulating the use of “ultra high-powered vehicles” on the road.

The issue stems from a fatal crash in 2019 when 15-year-old Sophia Naismith was tragically struck and killed by an out-of-control Lamborghini Huracan driven by Alexander Campbell. After Campbell avoided jail with a suspended sentence last year, community backlash created a political case for change. As covered by Drive.com.au, the government has now implemented a raft of new road laws in response.

The laws designate a new class of “ultra high-powered vehicles” (UHPV). This covers any vehicle with a power-to-weight ratio of 276 kW/metric tonne (about 370 hp/tonne) or more and a gross mass under 4.5 tonnes (9,920 pounds). Roughly 200 models are currently expected to fall into this classification, with buses and motorbikes exempt from the rules. It notably includes the Lamborghini Huracan, which boasts a power-to-weight ratio of 292 kW/tonne (about 392 hp/tonne). For reference, another sports car, the base Chevrolet Corvette, comes in at 242 kW/tonne and is not subject to these rules. The 670-horsepower Z06 version of that car is, though.
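The classification boils down to simple arithmetic, which can be sketched as follows. Note the assumptions: the 276 kW/tonne figure is treated as an at-or-above threshold (which the Huracan and Corvette examples imply), and the bus/motorbike exemptions are not modeled:

```python
def is_uhpv(power_kw: float, gross_mass_tonnes: float) -> bool:
    """Rough check against South Australia's UHPV definition as described
    in the article: a power-to-weight ratio of 276 kW/tonne or more, and
    a gross mass under 4.5 tonnes. Exempt vehicle types (buses,
    motorbikes) are not modeled here.
    """
    ratio = power_kw / gross_mass_tonnes
    return ratio >= 276 and gross_mass_tonnes < 4.5

# Inputs constructed to match the article's quoted ratios, not real
# vehicle specs:
print(is_uhpv(power_kw=292 * 1.5, gross_mass_tonnes=1.5))   # Huracan-like ratio → True
print(is_uhpv(power_kw=242 * 1.5, gross_mass_tonnes=1.5))   # base-Corvette-like ratio → False
print(is_uhpv(power_kw=300 * 5.0, gross_mass_tonnes=5.0))   # high ratio but over 4.5 t → False
```

The third case shows why the mass cap matters: a powerful truck or bus can exceed 276 kW/tonne-adjacent figures in absolute power while falling outside the class entirely.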

After December 1, 2024, those wishing to drive a UHPV must hold a special ‘U Class’ license. Obtaining this license requires passing an online course currently in development by the South Australian state government. Furthermore, drivers must have held a regular car or heavy vehicle license for at least three years to be eligible for a U license. There will be no retroactive exemptions: all current drivers wishing to drive UHPV-class cars will be required to take the course.

Another major change, as reported by MSN, makes it a criminal offense to disable traction control and other driver aids in an “ultra high-powered vehicle.” Specifically, the rule applies to “anti-lock braking, automated emergency braking, electronic stability control or traction control” systems, but not lane-keeping assists and parking sensors.

Drivers breaking this rule will be subject to penalties of up to $5,000 AUD. However, there are reasonable defenses, such as switching off driver aids in conditions where doing so is justified, for instance if the vehicle is stuck. Similarly, a further defense exists if the driver did not disable the system themselves and was unaware it had been disabled. They will have to prove that, of course.

Meanwhile, if a driver crashes while in “sports mode” or with traction control disabled, and that incident causes death or serious harm, the driver will be charged with an “aggravated offense,” which carries new, harsher penalties. For example, prior to the change, the charge of “aggravated driving without due care causing death” carried a maximum 12-month jail penalty and six-month driving disqualification. That has now been increased to seven years in jail and three years of disqualification. This relates directly to Campbell’s crash, which was alleged to have been caused in part by the use of the Huracan’s sports mode.

[…]

Source: Special License For Supercars Will Be Required In South Australia by 2024

Whilst I agree with the idea of needing a supercar license, making it an offense to turn off driving aids is a bit sketchy for me…

Tesla Hackers Find ‘Unpatchable’ Jailbreak to Unlock Paid Features for Free

A security researcher and three PhD students from Germany have reportedly found a way to exploit Tesla’s current AMD-based cars to develop what could be the world’s first persistent “Tesla jailbreak.”

The team published a briefing ahead of their presentation at next week’s Black Hat 2023, where they will present a working version of an attack against Tesla’s latest AMD-based media control unit (MCU). According to the researchers, the jailbreak uses an already-known hardware exploit against a component in the MCU, which ultimately enables access to critical systems that control in-car purchases, and could perhaps even trick the car into thinking those purchases have already been paid for.

[…]

Tesla has started using this well-established platform to enable in-car purchases, not only for additional connectivity features but even for analog features like faster acceleration or rear heated seats. As a result, hacking the embedded car computer could allow users to unlock these features without paying.

Separately, the attack allows researchers to extract a vehicle-specific cryptographic key that is used to authenticate and authorize a vehicle within Tesla’s service network.

According to the researchers, the attack is unpatchable on current cars, meaning that no matter what software updates Tesla pushes out, attackers—or perhaps even DIY hackers in the future—can run arbitrary code on Tesla vehicles as long as they have physical access to the car. Specifically, the attack is unpatchable because it is not aimed at a Tesla-made component, but rather at the embedded AMD Secure Processor (ASP) that lives inside the MCU.

[…]

Tesla is guilty of a practice many car owners hate: shipping vehicles with hardware installed but locked behind software. For example, the RWD Model 3 has footwell lights installed from the factory, but they are disabled in software. Tesla also previously locked the heated steering wheel and heated rear seats behind a software paywall, but began activating them on new cars at no extra cost in 2021. There’s also the $2,000 “Acceleration Boost” upgrade for certain cars that drops half a second off the zero-to-60 time.

[…]

Source: Tesla Hackers Find ‘Unpatchable’ Jailbreak to Unlock Paid Features for Free

Water-soluble circuit boards could cut carbon footprints by 60 percent

German semiconductor maker Infineon Technologies AG announced that it’s producing a printed circuit board (PCB) that dissolves in water. Sourced from UK startup Jiva Materials, the plant-based Soluboard could provide a new avenue for the tech industry to reduce e-waste as companies scramble to meet climate goals by 2030.

Jiva’s biodegradable PCB is made from natural fibers and a halogen-free polymer with a much lower carbon footprint than traditional boards made with fiberglass composites. A 2022 study by the University of Washington College of Engineering and Microsoft Research saw the team create an Earth-friendly mouse using a Soluboard PCB as its core. The researchers found that the Soluboard dissolved in hot water in under six minutes. However, it can take several hours to break down at room temperature.

In addition to dissolving the PCB fibers, the process makes it easier to retrieve the valuable metals attached to it. “After [it dissolves], we’re left with the chips and circuit traces which we can filter out,” said UW assistant professor Vikram Iyer, who worked on the mouse project.

[…]

Jiva says the board has a 60 percent smaller carbon footprint than traditional PCBs — specifically, it can save 10.5 kg of carbon and 620 g of plastic per square meter of PCB.

[…]

Source: Water-soluble circuit boards could cut carbon footprints by 60 percent | Engadget

AI-enabled brain implant helps spine damaged patient regain feeling and movement

Keith Thomas from New York was involved in a driving accident back in 2020 that injured his spine’s C4 and C5 vertebrae, leading to a total loss of feeling and movement from the chest down. Recently, though, Thomas has been able to move his arm at will and feel his sister hold his hand, thanks to AI brain implant technology developed by Northwell Health’s Feinstein Institute of Bioelectronic Medicine.

The research team first spent months mapping his brain with MRIs to pinpoint the exact parts of his brain responsible for arm movements and the sense of touch in his hands. Then, four months ago, surgeons performed a 15-hour procedure to implant microchips into his brain — Thomas was even awake for some parts so he could tell them what sensations he was feeling in his hand as they probed parts of the organ.

While the microchips are inside his body, the team also installed external ports on top of his head. Those ports connect to a computer with the artificial intelligence (AI) algorithms that the team developed to interpret his thoughts and turn them into action. The researchers call this approach “thought-driven therapy,” because it all starts with the patient’s intentions. If he thinks of wanting to move his hand, for instance, his brain implant sends signals to the computer, which then sends signals to the electrode patches on his spine and hand muscles in order to stimulate movement. They attached sensors to his fingertips and palms, as well, to stimulate sensation.

Thanks to this system, he was able to move his arm at will and feel his sister holding his hand in the lab. While he needed to be attached to the computer for those milestones, the researchers say Thomas has shown signs of recovery even when the system is off. His arm strength has apparently “more than doubled” since the study began, and his forearm and wrist can now feel some new sensations. If all goes well, the team’s thought-driven therapy could help him regain more of his sense of touch and mobility.

While the approach has a ways to go, the team behind it is hopeful that it could change the lives of people living with paralysis.

[…]

Source: AI-enabled brain implant helps patient regain feeling and movement | Engadget