Climate crisis will make Europe’s beer cost more and taste worse

Climate breakdown is already changing the taste and quality of beer, scientists have warned.

The quantity and quality of hops, a key ingredient in most beers, is being affected by global heating, according to a study. As a result, beer may become more expensive and manufacturers will have to adapt their brewing methods.

Researchers forecast that hop yields in European growing regions will fall by 4-18% by 2050 if farmers do not adapt to hotter and drier weather, while the content of alpha acids in the hops, which gives beers their distinctive taste and smell, will fall by 20-31%.

“Beer drinkers will definitely see the climate change, either in the price tag or the quality,” said Miroslav Trnka, a scientist at the Global Change Research Institute of the Czech Academy of Sciences and co-author of the study, published in the journal Nature Communications. “That seems to be inevitable from our data.”

Beer, the third-most popular drink in the world after water and tea, is made by fermenting malted grains like barley with yeast. It is usually flavoured with aromatic hops grown mostly in the middle latitudes that are sensitive to changes in light, heat and water.

[…]

Source: Climate crisis will make Europe’s beer cost more and taste worse, say scientists | Europe | The Guardian

Microplastics detected in clouds hanging atop two Japanese mountains

[…]

The clouds around Japan’s Mount Fuji and Mount Oyama contain concerning levels of the tiny plastic bits, and highlight how the pollution can be spread long distances, contaminating the planet’s crops and water via “plastic rainfall”.

The plastic was so concentrated in the samples researchers collected that it is thought to be causing clouds to form while giving off greenhouse gases.

“If the issue of ‘plastic air pollution’ is not addressed proactively, climate change and ecological risks may become a reality, causing irreversible and serious environmental damage in the future,” the study’s lead author, Hiroshi Okochi, a professor at Waseda University, said in a statement.

The peer-reviewed paper was published in Environmental Chemistry Letters, and the authors believe it is the first to check clouds for microplastics.

[…]

Waseda researchers gathered samples at altitudes ranging from 1,300 to 3,776 meters, which revealed nine types of polymers, such as polyurethane, and one type of rubber. The cloud water contained about 6.7 to 13.9 pieces of microplastic per litre, and among them was a large volume of “water loving” plastic bits, which suggests the pollution “plays a key role in rapid cloud formation, which may eventually affect the overall climate”, the authors wrote in a press release.

That is potentially a problem because microplastics degrade much faster when exposed to ultraviolet light in the upper atmosphere, and give off greenhouse gases as they do. A high concentration of these microplastics in clouds in sensitive polar regions could throw off the ecological balance, the authors wrote.

The findings highlight how microplastics are highly mobile and can travel long distances through the air and environment. Previous research has found the material in rain, and the study’s authors say the main source of airborne plastics may be sea spray, or aerosols released when waves crash or ocean bubbles burst. Dust kicked up by cars on roads is another potential source, the authors wrote.

Source: Microplastics detected in clouds hanging atop two Japanese mountains

New Fairy Circles Identified at Hundreds of Sites Worldwide

Round discs of dirt known as “fairy circles” mysteriously appear like polka dots on the ground, in patterns that can spread out for miles. The origins of this phenomenon have intrigued scientists for decades, with recent research indicating that the circles may be more widespread than previously thought.

Fairy circles in NamibRand Nature Reserve in Namibia; Photo: N. Juergens/AAAS/Science

Fairy circles had previously been sighted only in Southern Africa’s Namib Desert and the outback of Western Australia. A recently published study used artificial intelligence to identify vegetation patterns resembling fairy circles in hundreds of new locations across 15 countries on three continents.

Published in the journal Proceedings of the National Academy of Sciences, the new survey analyzed datasets containing high-resolution satellite images of drylands and arid ecosystems with scant rainfall from around the world.

Examining the new findings may help scientists understand fairy circles and the origins of their formation on a global scale. The researchers searched for patterns resembling fairy circles using a neural network, a type of AI that processes information in a manner similar to the human brain.

“The use of artificial intelligence based models on satellite imagery is the first time it has been done on a large scale to detect fairy-circle like patterns,” said lead study author Dr. Emilio Guirado, a data scientist with the Multidisciplinary Institute for Environmental Studies at the University of Alicante in Spain.

Drone flies over the NamibRand Nature Reserve; Photo: Dr. Stephan Getzin

The scientists first trained the neural network to recognize fairy circles by inputting more than 15,000 satellite images taken over Namibia and Australia. Then they fed the AI satellite views of nearly 575,000 plots of land worldwide, each measuring approximately 2.5 acres.

The neural network scanned vegetation in those images and identified repeating circular patterns that resembled fairy circles, evaluating the circles’ shapes, sizes, locations, pattern densities, and distribution. The output was then reviewed by humans to double-check the work of the neural network.

“We had to manually discard some artificial and natural structures that were not fairy circles based on photo-interpretation and the context of the area,” Guirado explained.
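The study’s own code is not reproduced here, but the pipeline it describes (train a convolutional neural network on labelled satellite image chips, then score hundreds of thousands of unlabelled plots and pass likely hits to human reviewers) can be sketched roughly as follows. This is a minimal illustration only: the tiny architecture, the 128x128 chip size and the random tensors standing in for real imagery are assumptions, not the authors’ actual model.

```python
# Minimal sketch of the kind of pipeline described above: train a small CNN
# on labelled satellite-image chips ("fairy circle" vs "other"), then score
# unlabelled plots. The architecture, chip size (128x128 RGB) and the random
# tensors standing in for real imagery are assumptions for illustration only.
import torch
import torch.nn as nn

class CircleClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: fairy-circle-like or not

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = CircleClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-ins for the ~15,000 labelled training chips (here: a tiny random batch).
images = torch.rand(8, 3, 128, 128)
labels = torch.randint(0, 2, (8, 1)).float()

for _ in range(5):                       # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# "Scanning" unlabelled plots: score new chips and keep likely candidates
# for the kind of human review the study describes.
candidates = torch.rand(4, 3, 128, 128)
scores = torch.sigmoid(model(candidates)).squeeze(1)
print([f"{s:.2f}" for s in scores.tolist()])
```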

The results of the study showed 263 dryland locations that contained circular patterns similar to the fairy circles in Namibia and Australia. The spots were located in Africa, Madagascar, Midwestern Asia, and both central and Southwest Australia.

New fairy circles identified around the world; Photo: Dressler/imageBROKER/Shutterstock

The authors of the study also collected environmental data where the new circles were identified, in hopes that this may indicate what causes them to form. They determined that fairy circle-like patterns were most likely to occur in dry, sandy soils that were highly alkaline and low in nitrogen. They also found that these patterns helped stabilize ecosystems, increasing an area’s resistance to disturbances such as extreme droughts and floods.

There are many theories among experts regarding the creation of fairy circles: certain climate conditions, self-organization in plants, insect activity, and more. The authors of the new study are optimistic that the new findings will help unlock the mysteries of this unique phenomenon.

Source: New Fairy Circles Identified at Hundreds of Sites Worldwide – TOMORROW’S WORLD TODAY®

Priming and placebo effects shape how humans interact with AI

The preconceived notions people have about AI — and what they’re told before they use it — mold their experiences with these tools in ways researchers are beginning to unpack.

Why it matters: As AI seeps into medicine, news, politics, business and a host of other industries and services, human psychology gives the technology’s creators levers they can use to enhance users’ experiences — or manipulate them.

What they’re saying: “AI is only half of the human-AI interaction,” says Ruby Liu, a researcher at the MIT Media Lab.

  • The technology’s developers “always think that the problem is optimizing AI to be better, faster, less hallucinations, fewer biases, better aligned,” says Pattie Maes, who directs the MIT Media Lab’s Fluid Interfaces Group.
  • “But we have to see this whole problem as a human-plus-AI problem. The ultimate outcomes don’t just depend on the AI and the quality of the AI. It depends on how the human responds to the AI,” she says.

What’s new: A pair of studies published this week looked at how much a person’s expectations about AI impacted their likelihood to trust it and take its advice.

A strong placebo effect works to shape what people think of a particular AI tool, one study revealed.

  • Participants who were about to interact with a mental health chatbot were told the bot was caring, was manipulative or was neither and had no motive.
  • After using the chatbot, which is based on OpenAI’s generative AI model GPT-3, most people primed to believe the AI was caring said it was. Participants who’d been told the AI had no motives said it didn’t. But they were all interacting with the same chatbot.
  • Only 24% of the participants who were told the AI was trying to manipulate them into buying its service said they perceived it as malicious.
  • That may be a reflection of humans’ positivity bias and that they may “want to evaluate [the AI] for themselves,” says Pat Pataranutaporn, a researcher at the MIT Media Lab and co-author of the new study published this week in Nature Machine Intelligence.
  • Participants who were told the chatbot was benevolent also said they perceived it to be more trustworthy, empathetic and effective than participants primed to believe it was neutral or manipulative.
  • The AI placebo effect has been described before, including in a study in which people playing a word puzzle game rated it better when told AI was adjusting its difficulty level (it wasn’t — there wasn’t an AI involved).

The intrigue: It wasn’t just people’s perceptions that were affected by their expectations.

  • Analyzing the words in conversations people had with the chatbot, the researchers found those who were told the AI was caring had increasingly positive conversations with the chatbot, whereas the interaction with the AI became more negative with people who’d been told it was trying to manipulate them.

For some tasks, AI is perceived to be more objective and trustworthy — a perception that may cause people to prefer an algorithm’s advice.

  • In another study published this week in Scientific Reports, researchers found that preference can lead people to inherit an AI’s errors.
  • Psychologists Lucía Vicente and Helena Matute from Deusto University in Bilbao, Spain, found that participants asked to perform a simulated medical diagnosis task with the help of an AI followed the AI’s advice, even when it was mistaken — and kept making those mistakes even after the AI was taken away.
  • “It is going to be very important that humans working with AI have not only the knowledge of how AI works … but also the time to oppose the advice of the AI — and the motivation to do it,” Matute says.

Yes, but: Both studies looked at one-off interactions between people and AI, and it’s unclear whether using a system day in and day out will change the effect the researchers describe.

The big picture: How people are introduced to AI and how it is depicted in pop culture, marketed and branded can be powerful determiners of how AI is adopted and ultimately valued, researchers said.

  • In previous work, the MIT Media Lab team showed that if someone has an “AI-generated virtual instructor” that looks like someone they admire, they are more motivated to learn and more likely to say the AI is a good teacher (even though their test scores didn’t necessarily improve).
  • Meta last month announced it was launching AI characters played by celebrities — like tennis star Naomi Osaka as an “anime-obsessed Sailor Senshi in training” and Tom Brady as a “wisecracking sports debater who pulls no punches.”
  • “There are just a lot of implications that come with the interface of an AI — how it’s portrayed, how it interacts with you, what it looks like, how it talks to you, what voice it has, what language it uses,” Maes says.

The placebo effect will likely be a “big challenge in the future,” says Thomas Kosch, who studies human-AI interaction at Humboldt University in Berlin. For example, someone might be more careless when they think an AI is helping them drive a car, he says. His own work also shows people take more risks when they think they are supported by an AI.

What to watch: The studies point to the possible power of priming people to have lower expectations of AI — but maybe only so far.

  • A practical lesson is “we should err on the side of portraying these systems and talking about these systems as not completely correct or accurate … so that people come with an attitude of ‘I’m going to make up my own mind about this system,'” Maes says.

Source: Placebo effect shapes how we see AI

News organizations blocking OpenAI

Ben Welsh has a running list of the news organizations blocking OpenAI crawlers:

In total, 532 of 1,147 news publishers surveyed by the homepages.news archive have instructed OpenAI, Google AI or the non-profit Common Crawl to stop scanning their sites, which amounts to 46.4% of the sample.

The three organizations systematically crawl web sites to gather the information that fuels generative chatbots like OpenAI’s ChatGPT and Google’s Bard. Publishers can request that their content be excluded by opting out via the robots.txt convention.

Source: News organizations blocking OpenAI

This reduces the value of AIs. The web used to be open to all, with information you could use as you liked. News organisations often fail to see the value in AI and are scared that their jobs will be taken by AIs rather than enhanced by them. So they try to wreck the AIs, a bit like saboteurs and Luddites. A real impediment to growth.
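For reference, the robots.txt opt-out mentioned in the quoted survey is just a plain text file listing the crawler user agents a publisher wants to exclude (GPTBot for OpenAI, Google-Extended for Google’s AI training, CCBot for Common Crawl). The snippet below is a rough illustration of checking a site’s robots.txt with Python’s standard library; example.com is a placeholder, not one of the surveyed publishers.

```python
# Rough illustration of the robots.txt opt-out described above: check whether
# a site disallows the AI-related crawlers. "example.com" is a placeholder.
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "Google-Extended", "CCBot"]  # OpenAI, Google AI, Common Crawl

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetches and parses the live robots.txt

for agent in AI_CRAWLERS:
    allowed = parser.can_fetch(agent, "https://example.com/")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

A publisher opting out would simply add a stanza such as “User-agent: GPTBot” followed by “Disallow: /” to its robots.txt file.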

Museum Collection Of Historical TV Culture At Risk Due To Copyright Takedowns

[…]

the informal nature of their collections means that they are exposed to serious threats from copyright, as the recent experience of The Museum of Classic Chicago Television makes clear. The Museum explains why it exists:

The Museum of Classic Chicago Television (FuzzyMemoriesTV) is constantly searching out vintage material on old videotapes saved in basements or attics, or sold at flea markets, garage sales, estate sales and everywhere in between. Some of it would be completely lost to history if it were not for our efforts. The local TV stations have, for the most part, regrettably done a poor job at preserving their history. Tapes were very expensive 25-30 years ago and there also was a lack of vision on the importance of preserving this material back then. If the material does not exist on a studio master tape, what is to be done? Do we simply disregard the thousands of off-air recordings that still exist holding precious “lost” material? We believe this would be a tragic mistake.

Dozens of TV professionals and private individuals have donated to the museum their personal copies of old TV programmes made in the 1970s and 1980s, many of which include rare and otherwise unavailable TV advertisements that were shown as part of the broadcasts. In addition to the main Museum of Classic Chicago Television site, there is also a YouTube channel with videos. However, as TorrentFreak recounts, the entire channel was under threat because of copyright takedown requests:

In a series of emails starting Friday and continuing over the weekend, [the museum’s president and lead curator] Klein began by explaining his team’s predicament, one that TorrentFreak has heard time and again over the past few years. Acting on behalf of a copyright owner, in this case Sony, India-based anti-piracy company Markscan hit the MCCTv channel with a flurry of copyright claims. If these cannot be resolved, the entire project may disappear.

One issue is that Klein was unable to contact Markscan to resolve the problem directly. He is quoted by TorrentFreak as saying: “I just need to reach a live human being to try to resolve this without copyright strikes. I am willing to remove the material manually to get the strikes reversed.”

Once the copyright enforcement machine is engaged, it can be hard to stop. As Walled Culture the book (free digital versions available) recounts, there are effectively no penalties for unreasonable or even outright false claims. The playing field is tipped entirely in favour of the copyright world, and anyone who is targeted using one of the takedown mechanisms is unlikely to be able to do much to contest them, unless they have good lawyers and deep pockets. Fortunately, in this case, an Ars Technica article on the issue reported that:

Sony’s copyright office emailed Klein after this article was published, saying it would “inform MarkScan to request retractions for the notices issued in response to the 27 full-length episode postings of Bewitched” in exchange for “assurances from you that you or the Fuzzy Memories TV Channel will not post or re-post any infringing versions from Bewitched or other content owned or distributed by SPE [Sony Pictures Entertainment] companies.”

That “concession” by Sony highlights the main problem here: the fact that a group of public-spirited individuals trying to preserve unique digital artefacts must live with the constant threat of copyright companies taking action against them. Moreover, there is also the likelihood that some of their holdings will have to be deleted as a result of those legal threats, despite the material’s possible cultural value or the fact that it is the only surviving copy. No one wins in this situation, but the purity of copyright must be preserved at all costs, it seems.

[…]

Source: Museum Collection Of Historical TV Culture At Risk Due To Copyright Takedowns | Techdirt

MGM Resorts cyberattack to cost $100 million

MGM Resorts has admitted that the cyberattack it suffered in September will likely cost the company at least $100 million.

The effects of the attack are expected to make a substantial dent in the entertainment giant’s third-quarter earnings and to still be felt in its Q4, although that impact is predicted to be “minimal.”

According to an 8-K filing with the Securities and Exchange Commission (SEC) on Thursday, MGM Resorts said it has also spent less than $10 million on “one-time expenses” such as legal and consultancy fees and the cost of bringing in third-party experts to handle the incident response.

These are the current estimates for the total costs incurred by the attack, which took slot machines to the sword and borked MGM’s room-booking systems, among other things, but the company admitted the full scope of costs has yet to be determined.

The good news is that MGM expects its cyber insurance policy to cover the financial impact of the attack.

The company also expects to fill its rooms to near-normal levels starting this month. September’s occupancy levels took a hit – 88 percent full compared to 93 percent at the same time last year – but October’s occupancy is forecast to be down just 1 percent and November is poised to deliver record numbers thanks to the Las Vegas Formula 1 event.

[…]

MGM Resorts confirmed personal data belonging to customers had been stolen during the course of the intrusion. Those who became customers before March 2019 may be affected.

Stolen data includes social security numbers, driving license numbers, passport numbers, and contact details such as names, phone numbers, email addresses, postal addresses, as well as gender and dates of birth.

At this time, there is no evidence to suggest that financial information, including bank account and card numbers, was compromised, and passwords are also believed to be unaffected.

[…]

Adam Marrè, CISO at cybersecurity outfit Arctic Wolf, told The Register: “When looking at the total cost of a breach, such as the one which impacted MGM, many factors can be taken into account. This can include a combination of revenue lost for downtime, extra hours worked for remediation, tools that may have been purchased to deal with the issue, outside incident response help, setting up and operating a hotline for affected people, fixing affected equipment, purchasing credit monitoring, and sending physical letters to victims. Even hiring an outside PR firm to help with crisis messaging. When you add up everything, $100 million does not sound like an unrealistic number for an organization like MGM.

[…]

Source: MGM Resorts cyberattack to cost $100 million • The Register

23andMe DNA site scraping incident leaked data on 1.3 million users

Genetic testing giant 23andMe confirmed that a data scraping incident resulted in hackers gaining access to sensitive user information and selling it on the dark web.

The information of nearly 7 million 23andMe users was offered for sale on a cybercriminal forum this week. The information included origin estimation, phenotype, health information, photos, identification data and more. 23andMe processes saliva samples submitted by customers to determine their ancestry.

When asked about the post, the company initially denied that the information was legitimate, calling it a “misleading claim” in a statement to Recorded Future News.

The company later said it was aware that certain 23andMe customer profile information was compiled through unauthorized access to individual accounts that were signed up for the DNA Relative feature — which allows users to opt in for the company to show them potential matches for relatives.

[…]

When pressed on how compromising a handful of user accounts would give someone access to millions of users, the spokesperson said the company does not believe the threat actor had access to all of the accounts but rather gained unauthorized entry to a much smaller number of 23andMe accounts and scraped data from their DNA Relative matches.

The spokesperson declined to confirm the specific number of customer accounts affected.

Anyone who has opted into DNA Relatives can view basic profile information of others who make their profiles visible to DNA Relative participants, a spokesperson said.

Users who are genetically related can access ancestry information, which is made clear to users when they create their DNA Relatives profile, the spokesperson added.

[…]

A researcher approached Recorded Future News after examining the leaked database and found that much of it looked real. The researcher spoke on condition of anonymity because he found the information of his wife and several of her family members in the leaked data set. He also found other acquaintances and verified that their information was accurate.

The researcher downloaded two files from the BreachForums post and found that one had information on 1 million 23andMe users of Ashkenazi heritage. The other file included data on more than 300,000 users of Chinese heritage.

The data included profile and account ID numbers, names, gender, birth year, maternal and paternal genetic markers, ancestral heritage results, and data on whether or not each user has opted into 23andme’s health data.

“It appears the information has been scraped from user profiles which are only supposed to be shared between DNA Matches. So although this particular leak does not contain genomic sequencing data, it’s still data that should not be available to the public,” the researcher said.

“23andme seems to think this isn’t a big deal. They keep telling me that if I don’t want this info to be shared, I should not opt into the DNA relatives feature. But that’s dismissing the importance of this data which should only be viewable to DNA relatives, not the public. And the fact that someone was able to scrape this data from 1.3 million users is concerning. The hacker allegedly has more data that they have not released yet.”

The researcher added that he discovered another issue whereby someone could enter a 23andMe profile ID, like the ones included in the leaked data set, into a URL and see that user’s profile.

The data available through this only includes profile photos, names, birth years and location but does not include test results.

“It’s very concerning that 23andme has such a big loophole in their website design and security where they are just freely exposing peoples info just by typing a profile ID into the URL. Especially for a website that deals with people’s genetic data and personal information. What a botch job by the company,” the researcher said.

[…]

The security policies of genetic testing companies like 23andMe have faced scrutiny from regulators in recent weeks. Three weeks ago, genetic testing firm 1Health.io agreed to pay the Federal Trade Commission (FTC) a $75,000 fine to resolve allegations that it failed to secure sensitive genetic and health data, retroactively overhauled its privacy policy without notifying and obtaining consent from customers whose data it had obtained, and tricked customers about their ability to delete their data.

Source: 23andMe scraping incident leaked data on 1.3 million users of Ashkenazi and Chinese descent

ICE, CBP, Secret Service All Illegally Used Smartphone Location Data

In a bombshell report, an oversight body for the Department of Homeland Security (DHS) found that Immigration and Customs Enforcement (ICE), Customs and Border Protection (CBP), and the Secret Service all broke the law while using location data harvested from ordinary apps installed on smartphones. In one instance, a CBP official also inappropriately used the technology to track the location of coworkers with no investigative purpose. For years U.S. government agencies have been buying access to location data through commercial vendors, a practice which critics say skirts the Fourth Amendment requirement of a warrant. During that time, the agencies have typically refused to publicly explain the legal basis on which they based their purchase and use of the data. Now, the report shows that three of the main customers of commercial location data broke the law while doing so, and didn’t have any supervisory review to ensure proper use of the technology. The report also recommends that ICE stop all use of such data until it obtains the necessary approvals, a request that ICE has refused.

The report, titled “CBP, ICE, and Secret Service Did Not Adhere to Privacy Policies or Develop Sufficient Policies Before Procuring and Using Commercial Telemetry Data,” is dated September 28, 2023, and comes from Joseph V. Cuffari, the Inspector General for DHS. The report was originally marked as “law enforcement sensitive,” but the Inspector General has now released it publicly.

Source: ICE, CBP, Secret Service All Illegally Used Smartphone Location Data – Slashdot

EPIC urges FTC to investigate Grindr’s data practices

On Wednesday, EPIC filed a complaint with the US government watchdog over Grindr’s “apparent failure to safeguard users’ sensitive personal data.” This includes both present and past users who have since deleted their accounts, according to the complaint. Despite promising in its privacy policy to delete personal info if customers remove their account, Grindr allegedly retained and disclosed some of this data to third parties.

Considering that people trust the dating app with a ton of very sensitive information — this includes their sexual preferences, self-reported HIV status, chat history, photos including nudes, and location information — “learning that Grindr breaks the promises it makes to users would likely affect a consumer’s decision regarding whether to use Grindr,” the complaint states [PDF].

Grindr, for its part, says privacy is of the utmost importance to it, and that these “unfounded” claims stem from allegations made by a disgruntled ex-worker. So that’s all right then.

“Privacy is a top priority for Grindr and the LGBTQ+ community we serve, and we have adopted industry-leading privacy practices and tools to protect and empower our users,” a spokesperson told The Register.

“We are sorry that the former employee behind the unfounded allegations in today’s request is dissatisfied with his departure from the company; we wish him the best.”

The former employee in question is Grindr’s ex-chief privacy officer Ron De Jesus. In June, De Jesus filed a wrongful termination lawsuit [PDF] against his former bosses that also accused the dating app of violating privacy laws.

According to the lawsuit, De Jesus was “leading the charge to keep Grindr compliant with state, national, and international laws” after Norway’s data protection agency fined the dating app biz about $12 million in December 2021 and a Wall Street Journal article in May 2022 accused the application developer of selling users’ location data.

But despite De Jesus’ attempts, “Grindr placed profit over privacy and got rid of Mr De Jesus for his efforts and reports,” the lawsuit alleges.

EPIC’s complaint, which highlights De Jesus’ allegations, asks the FTC to look into potential violations of privacy law, including Grindr’s data retention and disclosure practices.

It also accuses Grindr of violating the Health Breach Notification Rule (HBNR). The dating app is subject to the HBNR because it asks users to self-report health data including HIV status, last-tested date, and vaccination status. By sharing these records with third parties and retaining health data after users deleted their accounts, Grindr allegedly breached the HBNR, EPIC says.

The privacy advocates at EPIC want the FTC to make Grindr comply with the laws and stop any “unlawful or impermissible” data retention practices. Additionally, the complaint calls on the federal agency to force Grindr to notify any users whose data was misused, and impose fines against the dating app for any violations of the HBNR.

Source: EPIC urges FTC to investigate Grindr’s data practices • The Register

Publishing A Book Means No Longer Having Control Over How Others Feel About It, Or How They’re Inspired By It. And That Includes AI.

[…]

I completely understand why some authors are extremely upset about finding out that their works were used to train AI. It feels wrong. It feels exploitive. (I do not understand their lawsuits, because I think they’re very much confused about how copyright law works.)

But, to me, many of the complaints about this amount to a similar discussion to ones we’ve had in the past, regarding concerns about what would happen if works were released without copyright and someone “bad” reused them. This sort of thought experiment is silly, because once a work is released and enters the messy real world, it’s entirely possible for things to happen that the original creator disagrees with or hates. Someone can interpret the work in ridiculous ways. Or it can inspire bad people to do bad things. Or any of a long list of other possibilities.

The original author has the right to speak up about the bad things, or to denounce the bad people, but the simple fact is that once you’ve released a work into the world, the original author no longer has control over how that work is used and interpreted by the world. Releasing a work into the world is an act of losing control over that work and what others can do in response to it. Or how or why others are inspired by it.

But, when it comes to the AI fights, many are insisting that they want to do exactly that around AI, and much of this came to a head recently when The Atlantic released a tool that allowed anyone to search to see which authors were included in the Books3 dataset (one of multiple collections of books that have been used to train AI). This led to a lot of people (both authors and non-authors) screaming about the evils of AI, and about how wrong it was that such books were included.

But, again, that’s the nature of releasing a work to the public. People read it. Machines might also read it. And they might use what they learn in that work to do something else. And you might like that and you might not, but it’s not really your call.

That’s why I was happy to see Ian Bogost publish an article explaining why he’s happy that his books were found in Books3, saying what those two other authors I spoke to wouldn’t say publicly. Ian is getting screamed at all over social media for this article, with most of it apparently based on the title and not on the substance. But it’s worth reading.

Whether or not Meta’s behavior amounts to infringement is a matter for the courts to decide. Permission is a different matter. One of the facts (and pleasures) of authorship is that one’s work will be used in unpredictable ways. The philosopher Jacques Derrida liked to talk about “dissemination,” which I take to mean that, like a plant releasing its seed, an author separates from their published work. Their readers (or viewers, or listeners) not only can but must make sense of that work in different contexts. A retiree cracks a Haruki Murakami novel recommended by a grandchild. A high-school kid skims Shakespeare for a class. My mother’s tree trimmer reads my book on play at her suggestion. A lack of permission underlies all of these uses, as it underlies influence in general: When successful, art exceeds its creator’s plans.

But internet culture recasts permission as a moral right. Many authors are online, and they can tell you if and when you’re wrong about their work. Also online are swarms of fans who will evangelize their received ideas of what a book, a movie, or an album really means and snuff out the “wrong” accounts. The Books3 imbroglio reflects the same impulse to believe that some interpretations of a work are out of bounds.

Perhaps Meta is an unappealing reader. Perhaps chopping prose into tokens is not how I would like to be read. But then, who am I to say what my work is good for, how it might benefit someone—even a near-trillion-dollar company? To bemoan this one unexpected use for my writing is to undermine all of the other unexpected uses for it. Speaking as a writer, that makes me feel bad.

More importantly, Bogost notes that the entire point of Books3 originally was to make sure that AI wasn’t just controlled by corporate juggernauts:

The Books3 database was itself uploaded in resistance to the corporate juggernauts. The person who first posted the repository has described it as the only way for open-source, grassroots AI projects to compete with huge commercial enterprises. He was trying to return some control of the future to ordinary people, including book authors. In the meantime, Meta contends that the next generation of its AI model—which may or may not still include Books3 in its training data—is “free for research and commercial use,” a statement that demands scrutiny but also complicates this saga. So does the fact that hours after The Atlantic published a search tool for Books3, one writer distributed a link that allows you to access the feature without subscribing to this magazine. In other words: a free way for people to be outraged about people getting writers’ work for free.

I’m not sure what I make of all this, as a citizen of the future no less than as a book author. Theft is an original sin of the internet. Sometimes we call it piracy (when software is uploaded to USENET, or books to Books3); other times it’s seen as innovation (when Google processed and indexed the entire internet without permission) or even liberation. AI merely iterates this ambiguity. I’m having trouble drawing any novel or definitive conclusions about the Books3 story based on the day-old knowledge that some of my writing, along with trillions more chunks of words from, perhaps, Amazon reviews and Reddit grouses, have made their way into an AI training set.

I get that it feels bad that your works are being used in ways you disapprove of, but that is the nature of releasing something into the world. And the underlying point of the Books3 database is to spread access to information to everyone. And that’s a good thing that should be supported, in the nature of folks like Aaron Swartz.

It’s the same reason why, even as lots of news sites are proactively blocking AI scanning bots, I’m actually hoping that more of them will scan and use Techdirt’s words to do more and to be better. The more information shared, the more we can do with it, and that’s a good thing.

I understand the underlying concerns, but that’s just part of what happens when you release a work to the world. Part of releasing something into the world is coming to terms with the fact that you no longer own how people will read it or be inspired by it, or what lessons they will take from it.

 

Source: Publishing A Book Means No Longer Having Control Over How Others Feel About It, Or How They’re Inspired By It. And That Includes AI. | Techdirt

JuMBOs: planet-like objects without stars to orbit – so not planets, according to the definition

A team of astronomers have detected over 500 planet-like objects in the inner Orion Nebula and the Trapezium Cluster that they believe could shake up the very definition of a planet.

The 4-light-year-wide Trapezium Cluster sits at the heart of the Orion Nebula, or Messier 42, about 1,400 light-years from Earth. The cluster is filled with young stars, which make their surrounding gas and dust glow with infrared light.

The Webb Space Telescope’s Near-Infrared Camera (NIRCam) observed the nebula at short and long wavelengths for nearly 35 hours between September 26, 2022, and October 2, 2022, giving researchers a remarkably sharp look at relatively small (meaning Jupiter-sized and smaller) isolated objects in the nebula. These NIRCam images are some of the largest mosaics from the telescope to date, according to a European Space Agency release. Though they cannot be hosted in all their resolved glory on this site, you can check them out on the ESASky application.

A planet, per NASA, is an object that orbits a star and is large enough to have taken on a spherical shape and have cast away other objects near its size from its orbit. According to the recent team, the Jupiter-mass binary objects (or JuMBOs) are big enough to be planetary but don’t have a star they’re clearly orbiting. Using Webb, the researchers also observed low-temperature planetary-mass objects (or PMOs). The team’s results are yet to be peer-reviewed but are currently hosted on the preprint server arXiv.

[…]

In the preprint, the team describes 540 planetary mass candidates, with the smallest masses clocking in at about 0.6 times the mass of Jupiter. According to The Guardian, analysis revealed steam and methane in the JuMBOs’ atmospheres. The researchers also found that 9% of those objects are in wide binaries, with separations equivalent to 100 times the distance between Earth and the Sun or more. That finding is perplexing, because objects of JuMBOs’ masses typically orbit a star. In other words, the JuMBOs look decidedly planet-like but lack a key characteristic of planets.

[…]

So what are the JuMBOs? It’s still not clear whether the objects form like planets—by accreting the gas and dust from a protoplanetary disk following a star’s formation—or more like the stars themselves. The Trapezium Cluster’s stars are quite young; according to the STScI release, if our solar system were a middle-aged person, the cluster’s stars would be just three or four days old. It’s possible that objects like the JuMBOs are actually common in the universe, but Webb is the first observatory that has the ability to pick out the individual objects.

[…]

Source: Quasi-Planets Called JuMBOs Are Bopping Around in Space

Arm patches Mali GPU driver bug exploited by spyware

Commercial spyware has exploited a security hole in Arm’s Mali GPU drivers to compromise some people’s devices, according to Google today.

These graphics processors are used in a ton of gear, from phones and tablets to laptops and cars, so the kernel-level vulnerability may be present in countless devices. That includes Android handsets made by Google, Samsung, and others.

The vulnerable drivers are paired with Arm’s Midgard (launched in 2010), Bifrost (2016), Valhall (2019), and fifth generation Mali GPUs (2023), so we imagine this buggy code will be in millions of systems.

On Monday, Arm issued an advisory for the flaw, which is tracked as CVE-2023-4211. This is a use-after-free bug affecting Midgard driver versions r12p0 to r32p0; Bifrost versions r0p0 to r42p0; Valhall versions r19p0 to r42p0; and Arm 5th Gen GPU Architecture versions r41p0 to r42p0.

We’re told Arm has corrected the security blunder in its drivers for Bifrost to fifth-gen. “This issue is fixed in Bifrost, Valhall, and Arm 5th Gen GPU Architecture Kernel Driver r43p0,” the advisory stated. “Users are recommended to upgrade if they are impacted by this issue. Please contact Arm support for Midgard GPUs.”

We note that version r43p0 of Arm’s open source Mali drivers for Bifrost to fifth-gen was released in March. Midgard has yet to publicly get that version, it appears, hence the need to contact Arm for it. We’ve asked Arm for more details on that.

What this means for the vast majority of people is: look out for operating system or manufacturer updates with Mali GPU driver fixes to install to close this security hole, or look up the open source drivers and apply updates yourself if you’re into that. Your equipment may already be patched by now, given the release in late March, and details of the bug are only just coming out. If you’re a device maker, you should be rolling out patches to customers.
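If you want to check a specific driver build against the advisory yourself, the comparison comes down to parsing the rNNpM version string and testing it against the affected ranges quoted above. The sketch below does just that; scripting it this way is an illustration, not an official Arm checker.

```python
# Minimal sketch: compare a Mali driver version string (rNNpM, as used in
# Arm's advisory) against the affected ranges for CVE-2023-4211. The ranges
# are transcribed from the advisory text quoted above; treat this as an
# illustration, not an official checker.
import re

AFFECTED = {
    "Midgard": ("r12p0", "r32p0"),
    "Bifrost": ("r0p0", "r42p0"),
    "Valhall": ("r19p0", "r42p0"),
    "Arm 5th Gen": ("r41p0", "r42p0"),
}

def parse(version: str) -> tuple[int, int]:
    m = re.fullmatch(r"r(\d+)p(\d+)", version)
    if not m:
        raise ValueError(f"unexpected version format: {version}")
    return int(m.group(1)), int(m.group(2))

def is_affected(family: str, version: str) -> bool:
    lo, hi = AFFECTED[family]
    return parse(lo) <= parse(version) <= parse(hi)

print(is_affected("Valhall", "r38p0"))   # True: inside the affected range
print(is_affected("Bifrost", "r43p0"))   # False: r43p0 carries the fix
```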

“A local non-privileged user can make improper GPU memory processing operations to gain access to already freed memory,” is how Arm described the bug. That, it seems, is enough to allow spyware to take hold of a targeted vulnerable device.

According to Arm there is “evidence that this vulnerability may be under limited, targeted exploitation.” We’ve received confirmation from Google, whose Threat Analysis Group’s (TAG) Maddie Stone and Google Project Zero’s Jann Horn found and reported the vulnerability to the chip designer, that this targeted exploitation has indeed taken place.

“At this time, TAG can confirm the CVE was used in the wild by a commercial surveillance vendor,” a TAG spokesperson told The Register. “More technical details will be available at a later date, aligning with our vulnerability disclosure policy.”

[…]

 

Source: Arm patches Mali GPU driver bug exploited by spyware • The Register

Amazon Used Secret ‘Project Nessie’ Algorithm To Raise Prices

Amazon used an algorithm code-named “Project Nessie” to test how much it could raise prices in a way that competitors would follow, according to redacted portions of the Federal Trade Commission’s monopoly lawsuit against the company. From a report: The algorithm helped Amazon improve its profit on items across shopping categories, and because of the power the company has in e-commerce, led competitors to raise their prices and charge customers more, according to people familiar with the allegations in the complaint. In instances where competitors didn’t raise their prices to Amazon’s level, the algorithm — which is no longer in use — automatically returned the item to its normal price point.

The company also used Nessie on what employees saw as a promotional spiral, where Amazon would match a discounted price from a competitor, such as Target.com, and other competitors would follow, lowering their prices. When Target ended its sale, Amazon and the other competitors would remain locked at the low price because they were still matching each other, according to former employees who worked on the algorithm and pricing team. The algorithm helped Amazon recoup money and improve margins. The FTC’s lawsuit redacted an estimate of how much it alleges the practice “extracted from American households,” and it also says it helped the company generate a redacted amount of “excess profit.” Amazon made more than $1 billion in revenue through use of the algorithm, according to a person familiar with the matter. Amazon stopped using the algorithm in 2019, some of the people said. It wasn’t clear why the company stopped using it.
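The report describes Nessie’s behaviour only in broad strokes: raise a price, watch whether competitors follow, and revert if they don’t. That loop is easy to caricature in code, and the sketch below is exactly that, a caricature built on hypothetical helper functions (get_competitor_prices, set_price) and made-up numbers, not Amazon’s actual system.

```python
# Caricature of the price-probing loop described in the report: raise a price,
# see whether competitors follow, revert if they don't. Every function and
# number here is hypothetical; nothing is taken from Amazon's actual system.
def probe_price(item, normal_price, markup=0.05, get_competitor_prices=None, set_price=None):
    test_price = normal_price * (1 + markup)
    set_price(item, test_price)                 # raise the price experimentally

    competitor_prices = get_competitor_prices(item)
    followed = all(p >= test_price * 0.99 for p in competitor_prices)

    if not followed:
        set_price(item, normal_price)           # revert if the market didn't follow
    return followed

# Toy usage with stub functions standing in for real price feeds.
prices = {"widget": 10.00}
probe_price(
    "widget", prices["widget"],
    get_competitor_prices=lambda item: [10.40, 10.55],
    set_price=lambda item, p: prices.update({item: p}),
)
print(prices["widget"])  # stays at the raised price because competitors "followed"
```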

Source: Amazon Used Secret ‘Project Nessie’ Algorithm To Raise Prices – Slashdot

radio-browser.info – a huge list of online radio streams + apps that use the list

What can radio-browser do for you?

I want to listen to radio
Please have a look at the list of apps that use this service by clicking on “Apps” in the header bar. You can also just use the search field on this webpage to find streams you want to listen to. Maybe you want a list of the most clicked streams of this service?

I want to add a stream to the database
Just click “New station” and add the stream. This service is completely automatic. More information in the FAQ. Streams CANNOT be changed at the moment by users.

I am the owner of a stream
You can add your stream. Streams can only be changed at the moment by the owner. Please follow the tutorial if you want to change your stream.

I am an app developer
Have a look at the API documentation at api.radio-browser.info
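As a small illustration of what the API offers app developers, the snippet below searches for stations by name. The endpoint shape follows the documentation at api.radio-browser.info; the specific mirror hostname is an assumption, so check the docs for a current server list.

```python
# Small example of querying the radio-browser API from Python. The endpoint
# shape follows the public docs at api.radio-browser.info; the specific
# mirror hostname used here is an assumption - pick one from the docs.
import json
import urllib.request

url = "https://de1.api.radio-browser.info/json/stations/search?name=jazz&limit=5"
req = urllib.request.Request(url, headers={"User-Agent": "example-radio-client/0.1"})

with urllib.request.urlopen(req) as response:
    stations = json.load(response)

for station in stations:
    print(station.get("name"), "->", station.get("url_resolved"))
```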

Source: radio-browser.info

Booking.com not passing on payments to hotels for months on end

Travel website Booking.com has left many hotel operators and other partners across the globe thousands of dollars out of pocket for months on end, blaming the lack of payment on a “technical issue”.

The issue is widespread in Thailand, Indonesia and Europe among hoteliers who are venting their frustrations in Facebook groups as rumours swirl about the cause of the failure to pay.

Usually, if a customer makes a booking for a hotel through the website Booking.com and elects to pay upfront, the site takes the payment and passes it on to the hotel operator, minus a commission.

Booking.com’s partners have reported issues receiving payments since July, and in some cases months earlier. While Booking.com has continued taking payments from customers, the company has not always passed on the amount owed to hotel operators and others whom the Guardian has spoken to.

In August, the Booking Group reported total revenues of $5.5bn and a profit of $1.3bn for the second quarter of 2023 – up 27% and 51% on the previous year respectively.

[…]

struggle to get in contact with anyone at Booking.com about the issue.

“There is no way to contact them. Online it says you must talk to finance or credit control, neither of whom have a phone number or email address.”

He said you can call a contact centre, which then lodges a ticket for those teams. But the ticket expires every four days, requiring another phone call to lodge a new ticket. The Guardian has been told by multiple hotel operators that this is the practice.

It has led many to attempt other ways to reach the company, including LinkedIn messaging, direct emails to the Booking group CEO and looking up individual financial officers online.

[…]

Others affected include travel bloggers and websites that are paid affiliate payments when customers click through a link on their site.

Some operators who spoke to news outlets in recent months reported being paid once their story became public. The Hungarian consumer watchdog last month launched a probe into the company’s failure to pay hotel operators in the country and raided Booking.com’s local office, after local reporting on the issue.

[…]

Infeld said merely paying back what is owed by the company is not sufficient. He wants every hotel that hasn’t been paid to be paid, along with market interest, and all Booking.com commissions waived.

[…]

Source: Travel website Booking.com leaves hoteliers thousands of dollars out of pocket | Business | The Guardian

Singapore plans to scan your face instead of your passport

[…] “Singapore will be one of the first few countries in the world to introduce automated, passport-free immigration clearance,” said minister for communications and information Josephine Teo in a wrap-up speech for the bill. Teo did concede that Dubai had such clearance for select enrolled travelers, but there was no assurance of other countries planning similar actions.

[…]

Another consideration for why passports will likely remain relevant in Singapore airports is for checking in with airlines. Airlines check passports not just to confirm identity, but also visas and more. Airlines are often held responsible for stranded passengers so will likely be required to confirm travelers have the documentation required to enter their destination.

The Register asked Singapore Airlines to confirm if passports will still be required on the airline after the implementation of biometric clearance. They deferred to Changi’s operator, Changi Airport Group (CAG), which The Reg also contacted – and we will update if a relevant reply arises.

What travelers will see is an expansion of a program already taking form. Changi airport currently uses facial recognition software and automated clearance for some parts of immigration.

[…]

Passengers who pre-submit required declarations online can already get through Singapore’s current automated immigration lanes in 20 to 30 seconds once they arrive at the front of the queue. It’s one reason Changi has a reputation for being quick to navigate.

[…]

According to CAG, the airport handled 5.12 million passenger movements in June 2023 alone. This figure is expected to only increase as it currently stands at 88 percent of pre-COVID levels and the government sees such efficiency as critical to managing the impending growth.

But the reasons for biometric clearance go beyond a boom in travelers. With an aging population and shrinking workforce, Singapore’s Immigration & Checkpoints Authority (ICA) will have “to cope without a significant increase in manpower,” said Teo.

Additionally, security threats including pandemics and terrorism call for Singapore to “go upstream” on immigration measures, “such as the collection of advance passenger and crew information, and entry restrictions to be imposed on undesirable foreigners, even before they arrive at our shores,” added the minister.

This collection and sharing of biometric information is what enables the passport-free immigration process – passenger and crew information will need to be disclosed to the airport operator to use for bag management, access control, gate boarding, duty-free purchases, as well as tracing individuals within the airport for security purposes.

The shared biometrics will serve as a “single token of authentication” across all touch points.

Members of Singapore’s parliament have raised concerns about shifting to universal automated clearance, including data privacy, and managing technical glitches.

According to Teo, only Singaporean companies will be allowed ICA-related IT contracts, vendors will be given non-disclosure agreements, and employees of such firms must undergo security screening. Traveler data will be encrypted and transported through data exchange gateways.

As for who will protect the data, that role goes to CAG, with ICA auditing its compliance.

In case of disruptions that can’t be handled by an uninterruptible power supply, off-duty officers will be called in to go back to analog.

And even though the ministry is pushing universal coverage, there will be some exceptions, such as those who are unable to provide certain biometrics or are less digitally literate. Teo promised their clearance can be done manually by immigration officers.

Source: Singapore plans to scan your face instead of your passport • The Register

Data safety is a real issue here – how long will the data be retained, and for what other purposes will it be used?

UK passport and immigration images database could be repurposed to catch shoplifters

Britain’s passport database could be used to catch shoplifters, burglars and other criminals under urgent plans to curb crime, the policing minister has said.

Chris Philp said he planned to integrate data from the police national database (PND), the Passport Office and other national databases to help police find a match with the “click of one button”.

But civil liberty campaigners have warned the plans would be an “Orwellian nightmare” that amount to a “gross violation of British privacy principles”.

Foreign nationals who are not on the passport database could also be found via the immigration and asylum biometrics system, which will be part of an amalgamated system to help catch thieves.

[…]

Until the new platform is created, he said police forces should search each database separately.

[…]

Emmanuelle Andrews, policy and campaigns manager at the campaign group, said: “Time and time again the government has relied on the social issue of the day to push through increasingly authoritarian measures. And that’s just what we’re seeing here with these extremely worrying proposals to encourage the police to scan our faces as we go to buy a pint of milk and trawl through our personal information.

“By enabling the police to use private dashcam footage, as well as the immigration and asylum system, and passport database, the government are turning our neighbours, loved ones, and public service officials into border guards and watchmen.

[…]

Silkie Carlo, director of Big Brother Watch, said: “Philp’s plan to subvert Brits’ passport photos into a giant police database is Orwellian and a gross violation of British privacy principles. It means that over 45 million of us with passports who gave our images for travel purposes will, without any kind of consent or the ability to object, be part of secret police lineups.

“To scan the population’s photos with highly inaccurate facial recognition technology and treat us like suspects is an outrageous assault on our privacy that totally overlooks the real reasons for shoplifting. Philp should concentrate on fixing broken policing rather than building an automated surveillance state.

“We will look at every possible avenue to challenge this Orwellian nightmare.”

Source: UK passport images database could be used to catch shoplifters | Police | The Guardian

Also, time and again we have seen that centralised databases are a really really bad idea – the data gets stolen and misused by the operators.

Amazon Partially Wins Against EU Digital Services Act

Amazon has partially won an EU court case concerning the European Union’s ecommerce regulation, the Digital Services Act (DSA).

On Thursday, the EU General Court ruled in favour of Amazon, agreeing to suspend a DSA requirement that obliges the company to make an ads library public.

Amazon argued that the requirement to publish an ads archive would result in the disclosure of confidential information that would cause “serious and irreparable harm to its advertising activities and, by extension, to all its activities.”

The company further claimed the disclosure of the ad information would weaken its competitive position and cause an irreversible loss of market share, as well as harm its ad partners.

However, the Court did not agree to suspend a separate DSA requirement that Amazon offer users of its store a recommendations option that is not based on profiling.

In 2022, Amazon was one of the 19 platforms made subject to the strictest level of regulation under the DSA, which seeks a greater degree of transparency and accountability from larger platforms and their algorithms.

The largest ecommerce platform has challenged its classification as a VLOP (very large online platform) under the regulation. It also filed for interim measures to suspend certain requirements while a decision on the wider legal challenge is pending.

The EU court thus granted Amazon interim relief on the advertising archive requirement, while declining to suspend the recommendations requirement.

Amazon’s wider challenge to its classification as a VLOP under the regulation continues.

Source: Amazon Partially Wins Against EU Digital Services Act – BW Businessworld – test

Microsoft is going nuclear to power its AI ambitions

Microsoft thinks next-generation nuclear reactors can power its data centers and AI ambitions, according to a job listing for a principal program manager who’ll lead the company’s nuclear energy strategy.

Data centers already use a hell of a lot of electricity, which could thwart the company’s climate goals unless it can find clean sources of energy. Energy-hungry AI makes that an even bigger challenge for the company to overcome. AI dominated Microsoft’s Surface event last week.

[…]

The job posting says it’s hiring someone to “lead project initiatives for all aspects of nuclear energy infrastructure for global growth.”

Microsoft is specifically looking for someone who can roll out a plan for small modular reactors (SMRs).

[…]

The US Nuclear Regulatory Commission just certified an SMR design for the first time in January, which allows utilities to choose the design when applying for a license for a new power plant. And it could usher in a whole new chapter for nuclear energy.

Even so, there are still kinks to work out if Microsoft wants to rely on SMRs to power the data centers where its cloud and AI live. An SMR requires more highly enriched uranium fuel, called HALEU, than today’s traditional reactors. So far, Russia has been the world’s major supplier of HALEU. There’s a push in the US to build up a domestic supply chain of uranium, which communities near uranium mines and mills are already fighting. Then there’s the question of what to do with nuclear waste: even a fleet of SMRs can generate significant amounts of it, and the US is still figuring out how to store it long term.

[…]

Microsoft has also made an audacious deal to purchase electricity from a company called Helion that’s developing an even more futuristic fusion power plant. Both old-school nuclear reactors and SMR designs generate electricity through nuclear fission, which is the splitting apart of atoms. Nuclear fusion involves forcing atoms together the way stars do to create their own energy. A fusion reactor is a holy grail of sorts — it would be a source of abundant clean energy that doesn’t create the same radioactive waste as nuclear fission. But despite decades of research and recent breakthroughs, most experts say a fusion power plant is at least decades away — and the world can’t wait that long to tackle climate change.

Helion’s backers also include OpenAI CEO and ChatGPT developer Sam Altman.

[…]

Source: Microsoft is going nuclear to power its AI ambitions – The Verge

Heat pumps twice as efficient as fossil fuel systems in cold weather, study finds – gas lobby misinforms, blocks uptake

Heat pumps are more than twice as efficient as fossil fuel heating systems in cold temperatures, research shows.

Even at temperatures approaching -30C, heat pumps outperform oil and gas heating systems, according to the research from Oxford University and the Regulatory Assistance Project thinktank.

[…]

The research, published in the specialist energy research journal Joule, used data from seven field studies in North America, Asia and Europe. It found that at temperatures below zero, heat pumps were between two and three times more efficient than oil and gas heating systems.
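
To put “two to three times more efficient” in concrete terms, here is a minimal illustrative sketch. The coefficient-of-performance (COP) values below are hypothetical placeholders chosen to match the broad ratio reported, not figures taken from the Joule paper.

    # Toy comparison of heating efficiency (illustrative numbers only).
    # COP = units of heat delivered per unit of input energy.
    def heat_delivered_kwh(energy_in_kwh: float, cop: float) -> float:
        return energy_in_kwh * cop

    gas_boiler_cop = 0.9   # a condensing gas boiler turns roughly 90% of fuel energy into heat
    heat_pump_cop = 2.5    # hypothetical cold-weather COP, consistent with "2-3x" better

    energy_in = 10.0  # kWh of input energy
    print(heat_delivered_kwh(energy_in, gas_boiler_cop))  # ~9 kWh of heat delivered
    print(heat_delivered_kwh(energy_in, heat_pump_cop))   # ~25 kWh of heat delivered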

The authors said the findings showed that heat pumps were suitable for almost all homes in Europe, including the UK, and should provide policymakers with the impetus to bring in new measures to roll them out as rapidly as possible.

Dr Jan Rosenow, the director of European programmes at the Regulatory Assistance Project and co-author of the report, said: “There has been a campaign spreading false information about heat pumps [including casting doubt on whether they work in cold weather]. People [in the UK] don’t know much about heat pumps, so it’s very easy to scare them by giving them wrong information.”

The Guardian and the investigative journalism organisation DeSmog recently revealed that lobbyists associated with the gas boiler sector had attempted to delay a key government measure to increase the uptake of heat pumps.

[…]

Source: Heat pumps twice as efficient as fossil fuel systems in cold weather, study finds | Energy | The Guardian

Ransomed.vc: Using the GDPR fine as a benchmark to ransom stolen data

On August 15, 2023, the threat actor “Ransomed,” operating under the alias “RansomForums,” posted on Telegram advertising their new forum and Telegram chat channel. On the same day, the domain ransomed[.]vc was registered.

But before activity on Ransomed had even really begun, the forum was the victim of a distributed denial-of-service (DDoS) attack. In response, the operators of the site quickly pivoted to rebrand it as a ransomware blog that, similar to other ransomware collectives, would adopt the approach of publicly listing victim names while issuing threats of data exposure unless ransoms are paid.

[…]

Ransomed is leveraging an extortion tactic that has not been observed before—according to communications from the group, they use data protection laws like the EU’s GDPR to threaten victims with fines if they do not pay the ransom. This tactic marks a departure from typical extortionist operations by twisting protective laws against victims to justify their illegal attacks.

[…]

The group has disclosed ransom demands for its victims, which span from €50,000 to €200,000. For comparison, GDPR fines can climb into the millions and beyond—the highest ever was over €1 billion. It is likely that Ransomed’s strategy is to set ransom amounts lower than the price of a fine for a data security violation, which may allow them to exploit this discrepancy in order to increase the chance of payment.
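
As a toy illustration of the economics the group appears to be exploiting, consider the comparison below. The figures are entirely hypothetical, and of course paying a criminal does nothing to remove actual regulatory liability.

    # Illustrative only: the pitch is that the ransom looks cheaper than a potential fine.
    potential_gdpr_fine_eur = 5_000_000   # hypothetical estimate of a worst-case fine
    ransom_demand_eur = 150_000           # within the reported €50,000-€200,000 range

    if ransom_demand_eur < potential_gdpr_fine_eur:
        # The attackers bet the victim will treat the ransom as the "cheaper" option,
        # even though paying does not make the breach (or the fine risk) go away.
        print("Ransom framed as the cheaper option")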

As of August 28, Ransomed operators have listed two Bitcoin addresses for payment on their site. Typically, threat actors do not make their wallet addresses public, instead sharing them directly with victims via a ransom note or negotiations portal.

These unconventional choices have set Ransomed apart from other ransomware operations, although it is still unproven if their tactics will be successful.

[…]

It is likely that Ransomed is a financially motivated project, and one of several short-lived projects from its creators.

The owner of the Ransomed Telegram chat claims to have the source code of Raid Forums and said they intend to use it in the future, indicating that while the owner is running a ransomware blog for now, there are plans to turn it back into a forum later—although the timeline for this reversion is not clear.

The forum has gained significant attention in the information security community and in threat communities for its bold statements of targeting large organizations. However, there is limited evidence that the attacks published on the Ransomed blog actually took place, beyond the threat actors’ claims.

[…]

As the security community continues to monitor this enigmatic group’s activities, one thing remains clear: the landscape of ransomware attacks continues to evolve, challenging defenders to adapt and innovate in response.

Source: The Emergence of Ransomed: An Uncertain Cyber Threat in the Making | Flashpoint

The Milky Way’s Mass is Much Lower Than We Thought

How massive is the Milky Way? It’s an easy question to ask, but a difficult one to answer. Imagine a single cell in your body trying to determine your total mass, and you get an idea of how difficult it can be. Despite the challenges, a new study has calculated an accurate mass of our galaxy, and it’s smaller than we thought.

One way to determine a galaxy’s mass is by looking at what’s known as its rotation curve. Measure the speed of stars in a galaxy versus their distance from the galactic center. The speed at which a star orbits is proportional to the amount of mass within its orbit, so from a galaxy’s rotation curve you can map the function of mass per radius and get a good idea of its total mass. We’ve measured the rotation curves for several nearby galaxies such as Andromeda, so we know the masses of many galaxies quite accurately.
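
The underlying relation is basic Newtonian dynamics: for a star on a roughly circular orbit, the mass enclosed within its orbit follows from its speed and distance. A minimal sketch of that calculation is below; the Sun’s speed and distance are rounded textbook values, and this is an illustration rather than the study’s actual pipeline.

    # Enclosed mass from a rotation curve: v^2 = G * M(<r) / r  =>  M(<r) = v^2 * r / G
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30     # kilograms per solar mass
    KPC = 3.086e19       # meters per kiloparsec

    def enclosed_mass_solar(v_km_s: float, r_kpc: float) -> float:
        v = v_km_s * 1_000.0   # convert to m/s
        r = r_kpc * KPC        # convert to meters
        return v**2 * r / G / M_SUN

    # The Sun orbits at roughly 230 km/s about 8 kpc from the galactic centre,
    # implying on the order of 1e11 solar masses inside its orbit.
    print(f"{enclosed_mass_solar(230, 8):.2e} solar masses enclosed")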

But since we are in the Milky Way itself, we don’t have a great view of stars throughout the galaxy. Toward the center of the galaxy, there is so much gas and dust we can’t even see stars on the far side. So instead we measure the rotation curve using neutral hydrogen, which emits faint light with a wavelength of about 21 centimeters. This isn’t as accurate as stellar measurements, but it has given us a rough idea of our galaxy’s mass. We’ve also looked at the motions of the globular clusters that orbit in the halo of the Milky Way. From these observations, our best estimate of the mass of the Milky Way is about a trillion solar masses, give or take.

[Figure: The distribution of stars seen by the Gaia surveys. Credit: Data: ESA/Gaia/DPAC, A. Khalatyan (AIP) & StarHorse team; Galaxy map: NASA/JPL-Caltech/R. Hurt]

This new study is based on the third data release of the Gaia spacecraft. It contains the positions of more than 1.8 billion stars and the motions of more than 1.5 billion stars. While this is only a fraction of the estimated 100-400 billion stars in our galaxy, it is a large enough number to calculate an accurate rotation curve. Which is exactly what the team did. Their resulting rotation curve is so precise that the team could identify what’s known as the Keplerian decline. This is the outer region of the Milky Way where stellar speeds start to drop off roughly in accordance with Kepler’s laws, since almost all of the galaxy’s mass is closer to the galactic center.
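
In the Keplerian regime essentially all of the galaxy’s mass lies inside a star’s orbit, so circular speeds should fall off as v ∝ 1/√r, and fitting that tail pins down the total mass with a single parameter. The sketch below uses made-up outer-galaxy data points (chosen only to be consistent with a result of roughly 200 billion solar masses) and is not the paper’s method.

    import numpy as np

    # In a Keplerian decline, v(r) = sqrt(G * M_total / r), so M_total = v^2 * r / G.
    G_ASTRO = 4.30e-6  # gravitational constant in kpc * (km/s)^2 per solar mass

    # Hypothetical outer-galaxy data points: radius in kpc, circular speed in km/s.
    r = np.array([20.0, 22.0, 24.0, 26.0])
    v = np.array([208.0, 198.0, 190.0, 182.0])

    # Each point gives an estimate of the total mass; average them for a rough fit.
    m_total = np.mean(v**2 * r / G_ASTRO)
    print(f"Implied total mass ~ {m_total:.2e} solar masses")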

The Keplerian decline allows the team to place a clear upper limit on the mass of the Milky Way. What they found was surprising. The best fit to their data placed the mass at about 200 billion solar masses, which is a fifth of previous estimates. The absolute upper mass limit for the Milky Way is 540 billion solar masses, meaning the Milky Way is at most about half as massive as we thought. Given the amount of known regular matter in the galaxy, this means the Milky Way has significantly less dark matter than we thought.

Source: The Milky Way’s Mass is Much Lower Than We Thought – Universe Today

Firefox now has private browser-based website translation – no cloud servers required

Web browsers have had tools that let you translate websites for years. But they typically rely on cloud-based translation services like Google Translate or Microsoft’s Bing Translator.

The latest version of Mozilla’s Firefox web browser does things differently. Firefox 118 brings support for Fullpage Translation, which can translate websites entirely in your browser. In other words, everything happens locally on your computer without any data sent to Microsoft, Google, or other companies.

Here’s how it works. Firefox will notice when you visit a website in a supported language that’s different from your default language, and a translate icon will show up in the address bar.

Tap that icon and you’ll see a pop-up window that asks what languages you’d like to translate from and to. If the browser doesn’t automatically detect the language of the website you’re visiting, you can set these manually.

Then click the “Translate” button, and a moment later the text on the page should be visible in your target language. If you’d prefer to go back to the original language, just tap the translate icon again and choose the option that says “show original.”

You can also tap the settings icon in the translation menu and choose to “always translate” or “never translate” a specific language so that you won’t have to manually invoke the translation every time you visit sites in that language.

Now for the bad news: Firefox Fullpage Translation only supports 9 languages so far:

  • Bulgarian
  • Dutch
  • English
  • French
  • German
  • Italian
  • Polish
  • Portuguese
  • Spanish

[…]

Source: Firefox 118 brings browser-based website translation (no cloud servers required… for a handful of supported languages) – Liliputing

Feds Probing Tesla For Lying About EV Ranges, Bullshitting Customers Who Complained

Back in July, Reuters released a bombshell report documenting how Tesla not only spent a decade falsely inflating the range of their EVs, but created teams dedicated to bullshitting Tesla customers who called in to complain about it. If you recall, Reuters noted how these teams would have a little, adorable party every time they got a pissed off user to cancel a scheduled service call. Usually by lying to them:

“Inside the Nevada team’s office, some employees celebrated canceling service appointments by putting their phones on mute and striking a metal xylophone, triggering applause from coworkers who sometimes stood on desks. The team often closed hundreds of cases a week and staffers were tracked on their average number of diverted appointments per day.”

The story managed to stay in the headlines for all of a day or two, quickly supplanted by gossip surrounding a fist fight between Elon Musk and Mark Zuckerberg that never happened.

But here in reality, Tesla’s routine misrepresentation of their product (and almost joyous gaslighting of their paying customers) has caught the eye of federal regulators, who are now investigating the company for fraudulent behavior:

“federal prosecutors have opened a probe into Tesla’s alleged range-exaggerating scheme, which involved rigging its cars’ software to show an inflated range projection that would then abruptly switch to an accurate projection once the battery dipped below 50% charged. Tesla also reportedly created an entire secret “diversion team” to dissuade customers who had noticed the problem from scheduling service center appointments.”
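
Taken at face value, the behavior described in the report amounts to showing an optimistic range figure until the battery drops below half charge, then switching to a realistic one. The toy sketch below is purely illustrative of that described logic, with a made-up inflation factor; it is not Tesla’s code or a known parameter.

    # Toy model of the behavior alleged in the Reuters report (illustrative only).
    def displayed_range_miles(charge_fraction: float, true_full_range: float,
                              inflation_factor: float = 1.25) -> float:
        remaining = charge_fraction * true_full_range
        if charge_fraction > 0.5:
            return remaining * inflation_factor  # optimistic projection shown to the driver
        return remaining                         # realistic projection once below 50%

    print(displayed_range_miles(0.8, 300))  # shows 300.0 miles vs 240 actually available
    print(displayed_range_miles(0.4, 300))  # shows 120.0 miles, matching reality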

This pretty clearly meets the threshold definition of “unfair and deceptive” under the FTC Act, so this shouldn’t be that hard of a case. Of course, whether it results in any sort of meaningful penalties or fines is another matter entirely. It’s very clear Musk historically hasn’t been very worried about what’s left of the U.S. regulatory and consumer protection apparatus holding him accountable for… anything.

Still, it’s yet another problem for a company that’s facing a flood of new competitors with an aging product line. And it’s another case thrown in Tesla’s lap on top of the glacially moving inquiry into the growing pile of corpses caused by obvious misrepresentation of under-cooked “self driving” technology, and an investigation into Musk covertly using Tesla funds to build himself a glass mansion.

Source: Feds Probing Tesla For Lying About EV Ranges, Bullshitting Customers Who Complained | Techdirt