Amazon Now Punishes Merchants Who Ship Their Own Products – flexing its monopoly!

Third-party merchants on Amazon who ship their own packages will see an additional fee for each product sold starting on Oct. 1st. Sellers could previously choose to ship their products without contributing to Amazon, but the new fee means members of Amazon’s Seller Fulfilled Prime program will be required to pay the company 2% on each product sold.

The new surcharge is in addition to other payments Amazon receives from merchants, starting with the selling plan, which costs $0.99 per product sold or $39.99 per month for an unlimited number of sales. The company also charges a referral fee on each item sold, with most rates ranging between 8% and 15% depending on the product category.
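To see how these fees stack up, here is a back-of-the-envelope sketch. The plan prices, the 2% Seller Fulfilled Prime surcharge, and the 15% referral rate come from the article; the item price and monthly volume are made-up numbers for illustration, and this ignores any of Amazon's other fees.

```python
# Fee figures from the article; item price and volume are hypothetical.
PRO_PLAN_MONTHLY = 39.99      # professional plan, unlimited sales
INDIVIDUAL_PER_ITEM = 0.99    # individual plan, charged per product sold
SFP_FEE_RATE = 0.02           # new Seller Fulfilled Prime surcharge
REFERRAL_RATE = 0.15          # most categories fall between 8% and 15%

def per_item_fees(price, items_per_month, professional=True):
    """Rough per-item cost to the seller, amortizing the monthly plan."""
    plan = PRO_PLAN_MONTHLY / items_per_month if professional else INDIVIDUAL_PER_ITEM
    referral = price * REFERRAL_RATE
    sfp = price * SFP_FEE_RATE
    return round(plan + referral + sfp, 2)

# A $100 item for a professional seller moving 400 items a month:
print(per_item_fees(100.0, 400))   # roughly $17 per item, $2 of it new
```

On these assumed numbers, the new 2% surcharge adds $2 per $100 item on top of the roughly $15 referral fee.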

Since the program launched in 2015, merchants have been able to ship their products independently without paying a fee to Amazon, but the new shipping charge may add pressure to switch to the company’s in-house service. As it stands, sellers can already incur other charges, including fees for stocking inventory, rental book service, high-volume listings, and refund administration, although Amazon does not list the costs on its website.

[…]

Source: Amazon Now Punishes Merchants Who Ship Their Own Products

This is a problem: Amazon is using its position to create a logistics monopoly and put other logistics firms out of business. Amazon should stick to being a marketplace, and the government should enforce that.

Scientists Recreate Pink Floyd Song By Reading Brain Signals of Listeners

Scientists have trained a computer to analyze the brain activity of someone listening to music and, based only on those neuronal patterns, recreate the song. The research, published on Tuesday, produced a recognizable, if muffled version of Pink Floyd’s 1979 song, “Another Brick in the Wall (Part 1).” […] To collect the data for the study, the researchers recorded from the brains of 29 epilepsy patients at Albany Medical Center in New York State from 2009 to 2015. As part of their epilepsy treatment, the patients had a net of nail-like electrodes implanted in their brains. This created a rare opportunity for the neuroscientists to record from their brain activity while they listened to music. The team chose the Pink Floyd song partly because older patients liked it. “If they said, ‘I can’t listen to this garbage,'” then the data would have been terrible, Dr. Schalk said. Plus, the song features 41 seconds of lyrics and two-and-a-half minutes of moody instrumentals, a combination that was useful for teasing out how the brain processes words versus melody.

Robert Knight, a neuroscientist at the University of California, Berkeley, and the leader of the team, asked one of his postdoctoral fellows, Ludovic Bellier, to try to use the data set to reconstruct the music “because he was in a band,” Dr. Knight said. The lab had already done similar work reconstructing words. By analyzing data from every patient, Dr. Bellier identified what parts of the brain lit up during the song and what frequencies these areas were reacting to. Much like how the resolution of an image depends on its number of pixels, the quality of an audio recording depends on the number of frequencies it can represent. To legibly reconstruct “Another Brick in the Wall,” the researchers used 128 frequency bands. That meant training 128 computer models, which collectively brought the song into focus. The researchers then ran the output from four individual brains through the model. The resulting recreations were all recognizably the Pink Floyd song but had noticeable differences. Patient electrode placement probably explains most of the variance, the researchers said, but personal characteristics, like whether a person was a musician, also matter.
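The “128 frequency bands, 128 models” idea can be sketched in code. This is a toy, with random numbers standing in for the electrode recordings and target spectrogram; the paper’s actual features and model architecture differ, and real audio reconstruction would additionally invert the predicted spectrogram back into a waveform.

```python
import numpy as np

rng = np.random.default_rng(0)
T, E, B = 500, 64, 128   # time steps, electrode features, frequency bands

# Synthetic stand-ins: neural features X and a target song spectrogram S.
X = rng.normal(size=(T, E))
W_true = rng.normal(size=(E, B))
S = X @ W_true + 0.1 * rng.normal(size=(T, B))

def fit_per_band_models(X, S, alpha=1.0):
    """Fit one ridge-regression model per frequency band.

    Returns a weight matrix of shape (features, bands); column b is the
    model for band b. Closed-form ridge: (X'X + aI)^-1 X'S solves all
    128 bands' least-squares problems in one call.
    """
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ S)

W = fit_per_band_models(X, S)
S_hat = X @ W   # predicted spectrogram: one column per frequency band

# Per-band correlation between the target and the reconstruction.
corr = np.array([np.corrcoef(S[:, b], S_hat[:, b])[0, 1] for b in range(B)])
print(S_hat.shape)   # (500, 128)
```

The point of the sketch is the shape of the problem: more frequency bands means more models, and collectively they “bring the song into focus,” just as more pixels sharpen an image.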

The data captured fine-grained patterns from individual clusters of brain cells. But the approach was also limited: Scientists could see brain activity only where doctors had placed electrodes to search for seizures. That’s part of why the recreated songs sound like they are being played underwater. […] The researchers also found a spot in the brain’s temporal lobe that reacted when volunteers heard the 16th notes of the song’s guitar groove. They proposed that this particular area might be involved in our perception of rhythm. The findings offer a first step toward creating more expressive devices to assist people who can’t speak. Over the past few years, scientists have made major breakthroughs in extracting words from the electrical signals produced by the brains of people with muscle paralysis when they attempt to speak.

Source: Scientists Recreate Pink Floyd Song By Reading Brain Signals of Listeners – Slashdot

Snapchat’s My AI Goes Rogue, Posts To Stories

On Tuesday, Snapchat’s My AI in-app chatbot posted its own Story to the app that appeared to be a photo of a wall and ceiling. It then stopped responding to users’ messages, which some Snapchat users found disconcerting. TechCrunch reports: Though the incident made for some great tweets (er, posts), we regret to inform you that My AI did not develop self-awareness and a desire to express itself through Snapchat Stories. Instead, the situation arose because of a technical outage, just as the bot explained. Snap confirmed the issue, which was quickly addressed last night, was just a glitch. (And My AI wasn’t snapping photos of your room, by the way). “My AI experienced a temporary outage that’s now resolved,” a spokesperson told TechCrunch.

However, the incident does raise the question as to whether or not Snap was considering adding new functionality to My AI that would allow the AI chatbot to post to Stories. Currently, the AI bot sends text messages and can even Snap you back with images — weird as they may be. But does it do Stories? Not yet, apparently. “At this time, My AI does not have Stories feature,” a Snap spokesperson told us, leaving us to wonder if that may be something Snap has in the works.

Source: Snapchat’s My AI Goes Rogue, Posts To Stories – Slashdot

Tesla’s New Range-Limited Model S, X Could Go Farther, but Are Software-Locked

Tesla has added a new Standard Range trim for both its aging Model S and Model X luxury cars this week, effectively slashing the barrier to entry for the automaker’s flagship sedan and SUV by a staggering $10,000 each. The Model S SR now comes in at $78,490, and the Model X SR at $88,490—both before the automaker’s mandatory $1,390 destination and $250 order fees.

As the name suggests, the $10,000 trade-off is how far the vehicle can travel on a charge. The Model S gets an 85-mile reduction to 320 miles (down from 405 miles), and the Model X shaves 79 miles off its range, resulting in 269 miles per charge (down from 348 miles). There’s just one catch that might rankle new SR owners: all Model S and X vehicles reportedly use the same gross-capacity battery pack regardless of trim. In other words, the Standard Range variants have been software-locked at a lower usable capacity to justify the price difference.


News of the pack being software-locked comes from Electrek, which confirmed the suspicion by speaking with several Tesla employees on the matter. Unfortunately, Tesla could not officially confirm or deny the claim, since it dissolved its communications and public relations department some time ago.

Software-locking a battery pack at a lower usable capacity is an old trick from Tesla’s sleeve, previously used to limit early Model S cars to 60 kWh of a 75 kWh pack. With these new configurations, the EV maker has also slowed the zero-to-60 mph sprint from 3.1 to 3.7 seconds in the Model S and from 3.8 to 4.4 seconds in the Model X.

[…]

Whether Tesla will let owners “unlock” the remainder of the car’s battery as an over-the-air purchase later on is currently unclear. Tesla previously allowed owners of early Model S 60D vehicles to pay $4,500 to access an additional 15 kWh of usable battery (later reduced to $2,000), whereas Model X owners have paid as much as $9,000 for the same privilege in the past.
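The concept is simple enough to illustrate in a few lines. This is a hypothetical toy, not Tesla’s actual firmware; the 60/75 kWh figures are the early Model S numbers mentioned above, and the “unlock” is modeled as nothing more than raising a software cap to the hardware limit.

```python
# Hypothetical illustration of a software capacity lock.
class Pack:
    def __init__(self, gross_kwh, usable_cap_kwh):
        self.gross_kwh = gross_kwh            # physical cells installed
        self.usable_cap_kwh = usable_cap_kwh  # firmware-enforced limit

    @property
    def usable_kwh(self):
        # The car never exposes more capacity than the paid-for cap.
        return min(self.gross_kwh, self.usable_cap_kwh)

    def purchase_unlock(self):
        # An over-the-air purchase simply raises the cap to the hardware limit.
        self.usable_cap_kwh = self.gross_kwh

pack = Pack(gross_kwh=75, usable_cap_kwh=60)
print(pack.usable_kwh)   # 60 before the unlock
pack.purchase_unlock()
print(pack.usable_kwh)   # 75 after
```

The hardware cost is identical either way; only the software cap changes, which is exactly why the pricing rubs people the wrong way.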

[…]

Source: Tesla’s New Range-Limited Model S, X Carry a Big Discount | The Drive

BMW and Mercedes are also locking features behind paywalls – features you already paid for, since you own the car’s hardware. These companies really shouldn’t be allowed to get away with it.

Blended Wing Body Demonstrator Jet Contract Awarded By Air Force

The U.S. Air Force says it has picked aviation startup JetZero to design and build a full-size demonstrator aircraft with a blended wing body, or BWB, configuration. The goal is for the aircraft, which has already received the informal moniker XBW-1, to be flying by 2027.

Secretary of the Air Force Frank Kendall made the announcement about JetZero‘s selection at an event today hosted by the Air & Space Forces Association. The service hopes this initiative will offer a pathway to future aerial refueling tankers and cargo aircraft that are significantly more fuel efficient than existing types with more traditional planforms. Such designs can also offer even heavier lift capacity thanks to their large internal volume, among other advantages. In this way, the effort could help inform requirements for the Next-Generation Air Refueling System (NGAS) and Next-Generation Airlift (NGAL) programs, which the Air Force is still in the process of refining.

“Blended wing body aircraft have the potential to significantly reduce fuel demand and increase global reach,” Secretary Kendall said in a statement in a separate press release. “Moving forces and cargo quickly, efficiently, and over long distance[s] is a critical capability to enable national security strategy.”

A rendering that JetZero previously released showing its BWB concept. JetZero

The service’s Office of Energy, Installations, and Environment is leading this initiative in cooperation with the Department of Defense’s Defense Innovation Unit (DIU). DIU is tasked with “accelerating the adoption of leading commercial technology throughout the military,” according to its website. Secretary Kendall said that NASA has also made important contributions to the effort.

“As outlined in the fiscal year 2023 National Defense Authorization Act, the Department of Defense plans to invest $235 million over the next four years to fast-track the development of this transformational dual-use technology, with additional private investment expected,” according to the Air Force’s press release. Additional funding will come from other streams, as well.

The Air Force and DIU have been considering bids for more than a year and by last month had reportedly narrowed the field down to just two competitors. JetZero is the only company to have previously publicly confirmed it was proposing a design, which it calls the Z-5, for the new BWB initiative. The company has partnered with Northrop Grumman on this project. Scaled Composites, a wholly-owned Northrop Grumman subsidiary that is well known for its bleeding-edge aerospace design and rapid prototyping capabilities, will specifically be supporting this work.

A rendering of JetZero’s BWB concept configured as a tanker, with F-35A Joint Strike Fighters flying in formation and receiving fuel. JetZero

A formal request for information issued last year outlined the main goals of the BWB project as centering on a design that would be at least 30 percent more aerodynamically efficient than a Boeing 767 or an Airbus A330. These two commercial airliners are notably the basis for the Boeing KC-46A Pegasus tanker (which has a secondary cargo-carrying capability), dozens of which are in Air Force service now, and the Airbus A330 Multi-Role Tanker Transport (MRTT).

A US Air Force KC-46A Pegasus tanker. USAF

The hope is that the BWB design, combined with unspecified advanced engine technology, could lead to substantially increased fuel efficiency. This, in turn, could allow future Air Force tankers and cargo aircraft based on the core design concept to fly further while carrying similar or even potentially greater payloads than are possible with the service’s current fleets.

“Several military transport configurations are possible with the BWB,” the Air Force’s press release notes. “Together, these aircraft types account for approximately 60% of the Air Force’s total annual jet fuel consumption.”

“We see benefits in both air refueling at range where you can get much more productivity—much more fuel delivered—as well as cargo,” Deputy Assistant Secretary of the Air Force for Operational Energy had also said during a presentation at the Global Air and Space Chiefs Conference in London in July.

[…]

A rendering of a past BWB design concept from Boeing. Boeing

[…]

Looking at the latest rendering, one thing that has immediately stood out to us is the potential signature management benefits of the design. Beyond having no vertical tail and the general blended body planform, which can already offer radar cross-section advantages, the top-mounted engines positioned at the rear of the fuselage are shielded from most aspects below. This could have major beneficial impacts on the aircraft’s infrared signature, as well as how it appears on radar under many circumstances.

A close-up of the rear end of the latest rendering of JetZero’s blended wing body design concept. USAF

JetZero has previously highlighted how the engine configuration directs sound waves upward, which the company says will reduce its noise signature while in flight, at least as perceived below. This has been touted as beneficial for commercial applications, where noise pollution could be a major issue, but could be useful for versions configured for military roles, as well. A quieter military transport aircraft, for instance, would be advantageous for covert or clandestine missions.

A screen capture from a part of JetZero’s website discussing the noise signature benefits of its blended wing body design. JetZero

The latest rendering for JetZero’s concept also shows passenger windows and doors along the side of the forward fuselage, highlighting its potential use for transporting personnel, as well as cargo. The company is already pitching the core design as a potential high-efficiency mid-market commercial airliner with a 230 to 250-passenger capacity and significant range in addition to military roles.

A close up of the front end of JetZero’s blended wing body design concept from the latest rendering showing the passenger windows and doors along the side. USAF

[…]

A blended wing body concept from the late 1980s credited to McDonnell-Douglas’ engineer Robert Liebeck. Liebeck is among those now working for JetZero. NASA via AviationWeek

“You’re looking at something with roughly a 50% greater efficiency here, right? So,… first order you’re talking about doubling the ranges or possibly doubling the payloads,” Tom Jones, Northrop Grumman Vice President and president of the company’s aeronautics sector, who was also present at today’s event, added. “Additionally, the folded wing type of design gives you a smaller spot factor so you can fit… more aircraft at potentially a remote location. And the aircraft is also capable of some degree of short takeoff [and] landing type things…”

A screen capture from a JetZero promotional video showing projected fuel savings for its blended wing body design, depending on configuration, compared to aircraft with more traditional designs. JetZero capture

“Having a lifting body is a great way to get off the ground quicker,” JetZero’s O’Leary also noted with regard to shorter takeoff and landing capabilities.

These performance improvements could have a number of significant operational benefits for the Air Force when it comes to future tanker and cargo aircraft.

Being able to operate from “shorter runways, [across] longer distances, [with] better efficiency to carry the same payload and get it to places” are all of interest to the Air Force, Maj. Gen. Albert Miller, the Director of Strategy, Plans, Requirements, and Programs at Air Mobility Command, explained.

[…]

Maj. Gen. Miller also stressed that the BWB demonstrator would not necessarily directly meet the Air Force’s demands for future tankers or airlifters. He did add that the design would definitely help inform those requirements and could still be a solution to the operational issues he had highlighted in regard to a future major conflict in the Pacific region.

[…]

A rendering of JetZero’s blended wing body design concept configured as a tanker refueling a notional future stealthy combat jet. Stealthy drones are also seen flying in formation with the crewed aircraft. JetZero

“Why now? Because there’s no time to wait,” Dr. Ravi Chaudhary, Assistant Secretary of the Air Force for Energy, Installations, and Environment, who also happens to be a retired Air Force officer who flew C-17A Globemaster III cargo planes, said at today’s event. “And all of you have recognized that we’ve entered a new era of great power competition in which the PRC [People’s Republic of China] has come to be known as our pacing challenge.”

[…]

“We’re in a race for technological superiority with what we call a pacing challenge, a formidable opponent [China], and that requires us to find new ways, new methods, and new processes to get the kind of advantage that we’ve become used to and need to preserve,” Secretary Kendall had said in his opening remarks. “And that competitive advantage can be found in the ability to develop and field superior technology to meet our warfighter requirements and to do so faster than our adversaries. Today, that spirit of innovation continues with the Blended Wing Body Program and the demonstration project.”

Kendall added that the potential benefits for the commercial aviation sector offered valuable opportunities for further partnerships.

A rendering of a JetZero blended wing body airliner at a civilian airport. JetZero

[…]

As the project now gets truly underway, more information about the BWB initiative from the government and industry sides will likely emerge. From what we have seen and heard already, the program could have significant impacts on future military and commercial aviation developments.

Source: Blended Wing Body Demonstrator Jet Contract Awarded By Air Force (Updated)

‘Flying Aliens’ Harassing Peruvian Village Are Actually Illegal Miners With Jetpacks

The mysterious attacks began on July 11. “Strange beings,” locals said, were visiting an isolated Indigenous community in rural Peru at night, harassing its inhabitants and attempting to kidnap a 15-year-old girl. […] News of the alleged extraterrestrial attackers quickly spread online as believers, skeptics, and internet sleuths around the world analyzed grainy videos posted by members of the Ikitu community. The reported sightings came on the heels of U.S. congressional hearings about unidentified aerial phenomena that ignited a global conversation about the possibility of extraterrestrial life visiting Earth.

Members of the Peruvian Navy and Police traveled to the isolated community, which is located 10 hours by boat from the Maynas provincial capital of Iquitos, to investigate the strange disturbances in early August. Last week, authorities announced that they believed the perpetrators were members of illegal gold mining gangs from Colombia and Brazil using advanced flying technology to terrorize the community, according to RPP Noticias. Carlos Castro Quintanilla, the lead investigator in the case, said that 80 percent of illegal gold dredging in the region is located in the Nanay river basin, where the Ikitu community is located.

One of the key pieces of the investigation related to the attempted kidnapping of a 15-year-old girl on July 29. Cristian Caleb Pacaya, a local teacher who witnessed the attack, said that the assailants “were using state of the art technology, like thrusters that allow people to fly.” He said that after looking the devices up on Google, he believed that they were “jetpacks.” Authorities have not made any arrests related to the attacks, nor named the alleged assailants or their organization directly. However, the prosecutor’s office claimed that it had already destroyed 110 dredging operations and 10 illegal mining camps in the area in 2023.

Source: ‘Flying Aliens’ Harassing Village Are Actually Illegal Miners With Jetpacks – Slashdot

Study gets monkeys drunk for 12 months at nine drinks a day, injects a dopamine-boosting gene therapy, and discovers they don’t want to do much of anything any more.

[…] a new study published on Monday in the journal Nature Medicine. The gene therapy was tested on macaque monkeys over 12 months, revealing promising results.

[…]

At the beginning of the study, the monkeys were gradually given alcohol until an addiction was established. Then, they began self-regulating their intake at an amount equating to roughly nine drinks per day for a human. The researchers separated the macaques into a control group and a separate group that received the gene therapy.

According to the study, the monkeys’ daily alcohol consumption increased over the first six months before an eight-week abstinence period was initiated. The gene therapy was delivered by drilling two small holes in the macaques’ skulls, through which researchers injected a gene that produces glial-derived neurotrophic factor, or GDNF, a protein that boosts dopamine production. Then the monkeys were given the option to drink water or alcohol for four weeks.

What researchers found astounded them. Just one round of gene therapy resulted in the test group reducing their drinking by 50% compared to the control group which didn’t receive therapy. Subsequent test periods used a four-week window of drinking and a four-week window of abstinence. With each round of therapy, researchers found the test group voluntarily consumed less alcohol after the abstinence period, and by the end of the 12-month study, that amount dropped by more than 90%.

[…]

However, researchers also found that the therapy influenced other measures, such as weight and water intake. The macaques in the test group drank less water than the control group and lost about 18% of their body weight.

[…]

Source: Gene Therapy Rewired Monkey’s Brains to Treat Alcohol Use Disorder, Study Finds

I want video of this lab!

Apple’s 2017 Batterygate finally leads to $500m payout, no more courts (in the US; UK case tbc)

Apple’s “Batterygate” legal saga is finally swinging shut – in the US, at least – with a final appeal being voluntarily dismissed, clearing the way for payouts to class members.

The US lawsuit, which combined 66 separate class actions into one big legal proceeding in California, was decided in 2020, with the outcome requiring Apple to pay out between $310 million and $500 million to claimants.

Some US claimants were unhappy with the outcome of the case, and appealed to the Ninth Circuit Court of Appeals. That appeal was finally dropped last week, allowing for payments to those who filed a claim before October 6, 2020, to begin. With around 3 million claims received, claimants will be due around $65 each.

“The settlement is the result of years of investigation and hotly contested litigation. We are extremely proud that this deal has been approved, and following the Ninth Circuit’s order, we can finally provide immediate cash payments to impacted Apple customers,” said Mark Molumphy, an attorney for plaintiffs in the case.

Apple didn’t respond to our questions.

A settlement nearly a decade in the making

For those who’ve chosen to forget about the whole Batterygate fiasco, it all started in 2016 when evidence began pointing to Apple throttling CPUs in older iPhones to prevent accelerated battery drain caused by newer software and loss of battery capacity in aging devices.

Devices affected by Apple’s CPU throttling include iPhone 6 and 7 series handsets as well as the first-generation iPhone SE.

Apple admitted as much in late 2017, and just a day later lawsuits began pouring in around the US from angry iDevice owners looking for recompense. Complaints continued into 2020 from users of older iPhones updated to iOS 14.2, who said their devices started overheating and the battery would drain in mere minutes.

The US case, as mentioned above, was decided in favor of the plaintiffs in 2020, though late last year the settlement was overturned by the Ninth Circuit, which said the lower court judge had applied the wrong legal standard in making his decision. The settlement was reinstated after a second examination earlier this year.

The reason for the objection and its withdrawal isn’t immediately clear. Lawyers for Sarah Feldman and Hondo Jan, who filed both objections to the settlement, didn’t immediately respond to questions from The Register.

Apple also won’t be completely off the hook for its iPhone throttling – it’s also facing a similar complaint in the UK, where a case was filed last year that Apple asked to have tossed in May. That attempt failed, and hearings in the case are scheduled for late August and early September.

The UK case, brought by consumer advocate Justin Gutmann, seeks to recover £1.6 billion ($2 billion) from Apple if, like the US case, the courts end up deciding against Cook and co.

Source: Apple Batterygate appeals saga comes to an end in the US • The Register

Virgin Galactic successfully flies tourists to space for first time

Virgin Galactic’s VSS Unity, the reusable rocket-powered space plane carrying the company’s first crew of tourists to space, successfully launched and landed on Thursday.

The mission, known as Galactic 02, took off shortly after 11am ET from Spaceport America in New Mexico.

Aboard the spacecraft were six individuals in total – the space plane’s commander and former Nasa astronaut CJ Sturckow, the pilot Kelly Latimer, as well as Beth Moses, Virgin Galactic’s chief astronaut instructor, who trained the crew before the flight.

The spacecraft also carried three private passengers, including the health and wellness coach Keisha Schahaff and her 18-year-old daughter, Anastasia Mayers, both of whom are Antiguan.

According to Space.com, Schahaff won her seat aboard the Galactic 02 as part of a fundraising competition by Space for Humanity, a non-profit organization seeking to democratize space travel. Mayers is studying philosophy and physics at Aberdeen University in Scotland. Together, Schahaff and Mayers are the first mother-daughter duo to venture to space together.

[…]

Source: Virgin Galactic successfully flies tourists to space for first time | Virgin Galactic | The Guardian

Canon is getting away with printers that won’t scan without ink — but HP might pay

Were you hoping Canon might be held accountable for its all-in-one printers that mysteriously can’t scan when they’re low on ink, forcing you to buy more? Tough: the lawsuit we told you about last year quietly ended in a private settlement rather than becoming a big class-action.

I just checked, and a judge already dismissed David Leacraft’s lawsuit in November, without Canon ever being forced to show what happens when you try to scan without a full ink cartridge. (Numerous Canon customer support reps wrote that it simply doesn’t work.)

Here’s the good news: HP, an even larger and more shameless manufacturer of printers, is still possibly facing down a class-action suit for the same practice.

As Reuters reports, a judge has refused to dismiss a lawsuit by Gary Freund and Wayne McMath that alleges many HP printers won’t scan or fax documents when their ink cartridges report that they’ve run low.

[…]

Interestingly, neither Canon nor HP spent any time trying to argue their printers do scan when they’re low on ink in the lawsuit responses I’ve read. Perhaps they can’t deny it? Epson, meanwhile, has an entire FAQ dedicated to reassuring customers that it hasn’t pulled that trick since 2008. (Don’t worry, Epson has other forms of printer enshittification.)

[…]

Source: Canon is getting away with printers that won’t scan sans ink — but HP might pay

CNET Deletes Thousands of Old Articles to Game Google Search

Tech news website CNET has deleted thousands of old articles over the past few months in a bid to improve its performance in Google Search results, Gizmodo has learned.

Archived copies of CNET’s author pages show the company deleted small batches of articles prior to the second half of July, but then the pace increased. Thousands of articles disappeared in recent weeks. A CNET representative confirmed that the company was culling stories but declined to share exactly how many it has taken down.

[…]

Taylor Canada, CNET’s senior director of marketing and communications, said: “In an ideal world, we would leave all of our content on our site in perpetuity. Unfortunately, we are penalized by the modern internet for leaving all previously published content live on our site.”

[…]

CNET shared an internal memo about the practice. Removing, redirecting, or refreshing irrelevant or unhelpful URLs “sends a signal to Google that says CNET is fresh, relevant and worthy of being placed higher than our competitors in search results,” the document reads.

According to the memo about the “content pruning,” the company considers a number of factors before it “deprecates” an article, including SEO, the age and length of the story, traffic to the article, and how frequently Google crawls the page. The company says it weighs historical significance and other editorial factors before an article is taken down. When an article is slated for deletion, CNET says it maintains its own copy, and sends the story to the Internet Archive’s Wayback Machine.

[…]

Google does not recommend deleting articles just because they’re considered “older,” said Danny Sullivan, the company’s Public Liaison for Google Search. In fact, the practice is something Google has advised against for years. After Gizmodo’s request for comment, Sullivan posted a series of tweets on the subject.

“Are you deleting content from your site because you somehow believe Google doesn’t like ‘old’ content? That’s not a thing! Our guidance doesn’t encourage this,” Sullivan tweeted.

[…]

However, SEO experts told Gizmodo content pruning can be a useful strategy in some cases, but it’s an “advanced” practice that requires high levels of expertise,[…]

Ideally outdated pages should be updated or redirected to a more relevant URL, and deleting content without a redirect should be a last resort. With fewer irrelevant pages on your site, the idea is that Google’s algorithms will be able to index and better focus on the articles or pages a publisher does want to promote.

Google may have an incentive to withhold details about its Search algorithm, both because it would rather be able to make its own decisions about how to rank websites, and because content pruning is a delicate process that can cause problems for publishers—and for Google—if it’s mishandled.

[…]

Whether or not deleting articles is an effective business strategy, it causes other problems that have nothing to do with search engines. For a publisher like CNET — one of the oldest tech news sites on the internet — removing articles means losing parts of the public record that could have unforeseen historical significance in the future.

[…]

Source: CNET Deletes Thousands of Old Articles to Game Google Search

That’s a big chunk of history gone there

Nearly every AMD CPU since 2017 vulnerable to Inception bug

AMD processor users, you have another data-leaking vulnerability to deal with: like Zenbleed, this latest hole can be exploited to steal sensitive data from a running vulnerable machine.

The flaw (CVE-2023-20569), dubbed Inception in reference to the Christopher Nolan flick about manipulating a person’s dreams to achieve a desired outcome in the real world, was disclosed by ETH Zurich academics this week.

And yes, it’s another speculative-execution-based side-channel that malware or a rogue logged-in user can abuse to obtain passwords, secrets, and other data that should be off limits.

Inception utilizes a previously disclosed vulnerability alongside a novel kind of transient execution attack, which the researchers refer to as training in transient execution (TTE), to leak information from an operating system kernel at a rate of 39 bytes per second on vulnerable hardware. In this case, vulnerable systems encompass pretty much AMD’s entire CPU lineup going back to 2017, including its latest Zen 4 Epyc and Ryzen processors.

Despite the potentially massive blast radius, AMD is downplaying the threat while simultaneously rolling out microcode updates for newer Zen chips to mitigate the risk. “AMD believes this vulnerability is only potentially exploitable locally, such as via downloaded malware,” the biz said in a public disclosure, which ranks Inception “medium” in severity.

Intel processors weren’t found to be vulnerable to Inception, but that doesn’t mean they’re entirely in the clear. Chipzilla is grappling with its own separate side-channel attack disclosed this week called Downfall.

How Inception works

As we understand it, successful exploitation of Inception takes advantage of the fact that in order for modern CPUs to achieve the performance they do, processor cores have to cut corners.

Rather than executing instructions strictly in order, the CPU core attempts to predict which ones will be needed and runs those out of sequence if it can, a technique called speculative execution. If the core guesses incorrectly, it discards or unwinds the computations it shouldn’t have done. That allows the core to continue getting work done without having to wait around for earlier operations to complete. Executing these instructions speculatively is also known as transient execution, and when this happens, a transient window is opened.

Normally, this process renders substantial performance advantages, and refining this process is one of several ways CPU designers eke out instruction-per-clock gains generation after generation. However, as we’ve seen with previous side-channel attacks, like Meltdown and Spectre, speculative execution can be abused to make the core start leaking information it otherwise shouldn’t to observers on the same box.
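The mechanics of such a leak can be illustrated with a toy model: a bounds-checked load whose side effect (a warmed cache line) survives even when the architectural result is thrown away. This is a conceptual Python sketch with made-up names, not real exploit code; actual attacks like Spectre and Inception measure cache timing on real hardware:

```python
# Toy model of a speculative-execution side channel (illustrative only).
# All names here are hypothetical; real attacks exploit CPU
# microarchitecture, not Python objects.

SECRET = [7, 5, 9]          # data the "kernel" should never reveal
public_data = [0, 1, 2, 3]  # data the attacker may legally read

cache = set()  # models which cache lines are "hot" after execution

def victim_load(index):
    """Bounds-checked load that, like a speculating CPU, touches
    memory before the bounds check is resolved."""
    if index < len(public_data):
        value = public_data[index]
    else:
        value = SECRET[index - len(public_data)]  # out-of-bounds read
    cache.add(value)           # side effect survives the rollback
    if index >= len(public_data):
        return None            # architectural result is squashed
    return value

def attacker_probe():
    """Recover leaked values by observing which cache lines are hot."""
    return sorted(v for v in cache if v not in public_data)

for i in range(len(public_data) + len(SECRET)):
    victim_load(i)

print(attacker_probe())  # -> [5, 7, 9], despite every OOB call returning None
```

The point of the model is that the “rollback” only undoes the architectural result (the return value), not the microarchitectural side effect (the cache entry), and that residue is exactly the channel these attacks measure.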

Inception is a fresh twist on this attack vector, and involves two steps. The first takes advantage of a previously disclosed vulnerability called Phantom execution (CVE-2022-23825) which allows an unprivileged user to trigger a misprediction — basically making the core guess the path of execution incorrectly — to create a transient execution window on demand.

This window serves as a beachhead for a TTE attack. Instead of leaking information from the initial window, the TTE injects new mispredictions, which trigger more future transient windows. This, the researchers explain, causes an overflow in the return stack buffer with an attacker-controlled target.

“The result of this insight is Inception, an attack that leaks arbitrary data from an unprivileged process on all AMD Zen CPUs,” they wrote.

In a video published alongside the disclosure, and included below, the Swiss team demonstrate this attack by leaking the root account hash from /etc/shadow on a Zen 4-based Ryzen 7700X CPU with all Spectre mitigations enabled.

You can find a more thorough explanation of Inception, including the researchers’ methodology in a paper here [PDF]. It was written by Daniël Trujillo, Johannes Wikner, and Kaveh Razavi, of ETH Zurich. They’ve also shared proof-of-concept exploit code here.

Source: Nearly every AMD CPU since 2017 vulnerable to Inception bug • The Register

‘We’re changing the clouds.’ An unintended test of geoengineering is fueling record ocean warmth

[…]

researchers are now waking up to another factor, one that could be filed under the category of unintended consequences: disappearing clouds known as ship tracks. Regulations imposed in 2020 by the United Nations’ International Maritime Organization (IMO) have cut ships’ sulfur pollution by more than 80% and improved air quality worldwide. The reduction has also lessened the effect of sulfate particles in seeding and brightening the distinctive low-lying, reflective clouds that follow in the wake of ships and help cool the planet. The 2020 IMO rule “is a big natural experiment,” says Duncan Watson-Parris, an atmospheric physicist at the Scripps Institution of Oceanography. “We’re changing the clouds.”

Several new studies have found that by dramatically reducing the number of ship tracks, the rule has caused the planet to warm up faster. That trend is magnified in the Atlantic, where maritime traffic is particularly dense. In the shipping corridors, the increased sunlight represents a 50% boost to the warming effect of human carbon emissions. It’s as if the world suddenly lost the cooling effect from a fairly large volcanic eruption each year, says Michael Diamond, an atmospheric scientist at Florida State University.

The natural experiment created by the IMO rules is providing a rare opportunity for climate scientists to study a geoengineering scheme in action—although it is one that is working in the wrong direction. Indeed, one such strategy to slow global warming, called marine cloud brightening, would see ships inject salt particles back into the air, to make clouds more reflective. In Diamond’s view, the dramatic decline in ship tracks is clear evidence that humanity could cool off the planet significantly by brightening the clouds. “It suggests pretty strongly that if you wanted to do it on purpose, you could,” he says.

The influence of pollution on clouds remains one of the largest sources of uncertainty in how quickly the world will warm up, says Franziska Glassmeier, an atmospheric scientist at the Delft University of Technology. Progress on understanding these complex interactions has been slow. “Clouds are so variable,” Glassmeier says.

Some of the basic science is fairly well understood. Sulfate or salt particles seed clouds by creating nuclei for vapor to condense into droplets. The seeds also brighten existing clouds by creating smaller, more numerous droplets. The changes don’t stop there, says Robert Wood, an atmospheric scientist at the University of Washington. He notes that smaller droplets are less likely to merge with others, potentially suppressing rainfall. That would increase the size of clouds and add to their brightening effect. But modeling also suggests that bigger clouds are more likely to mix with dry air, which would reduce their reflectivity.
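The droplet effect Wood describes is often approximated by the Twomey relation: at fixed liquid water, cloud optical depth scales roughly with droplet number concentration to the one-third power, and albedo can then be estimated with a simple two-stream formula. A rough numerical sketch, using illustrative round numbers rather than values from the studies above:

```python
# Rough sketch of the Twomey (cloud brightening) effect.
# Assumptions: fixed liquid water path, so optical depth tau scales
# as (N / N_ref)^(1/3); albedo from the simple two-stream estimate
# A = tau / (tau + 7.7). Numbers are illustrative only.

def albedo(tau):
    """Two-stream approximation of cloud albedo from optical depth."""
    return tau / (tau + 7.7)

def optical_depth(n_droplets, tau_ref=10.0, n_ref=100.0):
    """Scale a reference optical depth by droplet number to the 1/3 power."""
    return tau_ref * (n_droplets / n_ref) ** (1.0 / 3.0)

clean = albedo(optical_depth(100))   # unpolluted marine cloud
seeded = albedo(optical_depth(200))  # doubled droplet count, e.g. a ship track

print(f"clean albedo:  {clean:.3f}")
print(f"seeded albedo: {seeded:.3f}")
```

Even in this crude model, doubling the droplet count brightens the cloud by several percentage points of reflected sunlight, which is why removing sulfate seeds at the scale of global shipping shows up in the planet’s energy budget.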

[…]

Source: ‘We’re changing the clouds.’ An unintended test of geoengineering is fueling record ocean warmth | Science | AAAS

The Fear Of AI and Entitled Cancel Culture Just Killed A Very Useful Tool: Prosecraft

I do understand why so many people, especially creative folks, are worried about AI and how it’s used. The future is quite unknown, and things are changing very rapidly, at a pace that can feel out of control. However, when concern and worry about new technologies and how they may impact things morphs into mob-inspiring fear, dumb things happen. I would much rather that when we look at new things, we take a more realistic approach to them, and look at ways we can keep the good parts of what they provide, while looking for ways to mitigate the downsides.

Hopefully without everyone going crazy in the meantime. Unfortunately, that’s not really the world we live in.

Last year, when everyone was focused on generative AI for images, we had Rob Sheridan on the podcast to talk about why it was important for creative people to figure out how to embrace the technology rather than fear it. The opening story of the recent NY Times profile of me was all about me in a group chat, trying to suggest to some very creative Hollywood folks how to embrace AI rather than simply raging against it. And I’ve already called out how folks rushing to copyright, thinking that will somehow “save” them from AI, are barking up the wrong tree.

But, in the meantime, the fear over AI is leading to some crazy and sometimes unfortunate outcomes. Benji Smith, who created what appears to be an absolutely amazing tool for writers, Shaxpir, also created what looked like an absolutely fascinating tool called Prosecraft, which had scanned and analyzed a whole bunch of books and would let you call up really useful data on them.

He created it years ago, based on an idea he had years earlier, trying to understand the length of various books (which he initially kept in a spreadsheet). As Smith himself describes in a blog post:

I heard a story on NPR about how Kurt Vonnegut invented an idea about the “shapes of stories” by counting happy and sad words. The University of Vermont “Computational Story Lab” published research papers about how this technique could show the major plot points and the “emotional story arc” of the Harry Potter novels (as well as many many other books).

So I tried it myself and found that I could plot a graph of the emotional ups and downs of any story. I added those new “sentiment analysis” tools to the prosecraft website too.

When I ran out of books on my own shelves, I looked to the internet for more text that I could analyze, and I used web crawlers to find more books. I wanted to be mindful of the diversity of different stories, so I tried to find books by authors of every race and gender, from every different cultural and political background, writing in every different genre and exploring all different kinds of themes. Fiction and nonfiction and philosophy and science and religion and culture and politics.

Somewhere out there on the internet, I thought to myself, there was a new author writing a horror or romance or fantasy novel, struggling for guidance about how long to write their stories, how to write more vivid prose, and how much “passive voice” was too much or too little.

I wanted to give those budding storytellers a suite of “lexicographic” tools that they could use, to compare their own writing with the writing of authors they admire. I’ve been working in the field of computational linguistics and machine learning for 20+ years, and I was always frustrated that the fancy tools were only accessible to big businesses and government spy agencies. I wanted to bring that magic to everyone.
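The “shapes of stories” technique Smith describes (counting happy and sad words through a text) can be sketched in a few lines. This is a minimal illustration with a made-up toy lexicon, not Prosecraft’s actual code; real sentiment tools use large curated lexicons of scored words:

```python
# Minimal "emotional arc" sketch: sliding-window sentiment scoring.
# The tiny lexicon below is made up for illustration.

POSITIVE = {"happy", "joy", "love", "win", "bright"}
NEGATIVE = {"sad", "fear", "lose", "dark", "cry"}

def sentiment_arc(text, window=5):
    """Score each word (+1 positive, -1 negative, 0 otherwise) and
    return the average score of each non-overlapping window of words."""
    words = text.lower().split()
    scores = [(w in POSITIVE) - (w in NEGATIVE) for w in words]
    return [
        sum(scores[i:i + window]) / window
        for i in range(0, len(scores) - window + 1, window)
    ]

story = "joy and love win the day but then dark fear and sad cry follow"
print(sentiment_arc(story))  # -> [0.6, -0.4]: the arc turns from happy to sad
```

Plotting those window averages over a full novel is what yields the rising-and-falling “emotional arc” graphs the Vonnegut-inspired research describes.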

Frankly, all of that sounds amazing. And amazingly useful. Even more amazing is that he built it, and it worked. It would produce useful analysis of books, such as this example from Alice’s Adventures in Wonderland:

And, it could also do further analysis like the following:

This is all quite interesting. It’s also the kind of thing that data scientists do on all kinds of work for useful purposes.

Smith built Prosecraft into Shaxpir, again, making it a more useful tool. But, on Monday, some authors on the internet found out about it and lost their shit, leading Smith to shut the whole project down.

There seems to be a lot of misunderstanding about all of this. Smith notes that he had researched the copyright issues and was sure he wasn’t violating anything, and he’s right. We’ve gone over this many times before. Scanning books is pretty clearly fair use. What you do with that later could violate copyright law, but I don’t see anything that Prosecraft did that comes anywhere even remotely close to violating copyright law.

But… some authors got pretty upset about all of it.

I’m still perplexed at what the complaint is here? You don’t need to “consent” for someone to analyze your book. You don’t need to “consent” to someone putting up statistics about their analysis of your book.

But, Zach’s tweet went viral with a bunch of folks ready to blow up anything that smacks of tech bro AI, and lots of authors started yelling at Smith.

The Gizmodo article has a ridiculously wrong “fair use” analysis, saying “Fair Use does not, by any stretch of the imagination, allow you to use an author’s entire copyrighted work without permission as a part of a data training program that feeds into your own ‘AI algorithm.’” Except… it almost certainly does? Again, we’ve gone through this with the Google Book scanning case, and the courts said that you can absolutely do that because it’s transformative.

It seems that what really tripped up people here was the “AI” part of it, and the fear that this was just another VC-funded “tech bro” exercise of building something to get rich by using the works of creatives. Except… none of that is accurate. As Smith explained in his blog post:

For what it’s worth, the prosecraft website has never generated any income. The Shaxpir desktop app is a labor of love, and during most of its lifetime, I’ve worked other jobs to pay the bills while trying to get the company off the ground and solve the technical challenges of scaling a startup with limited resources. We’ve never taken any VC money, and the whole company is a two-person operation just working our hardest to serve our small community of authors.

He also recognizes that the concerns about it being some “AI” thing are probably what upset people, but plenty of authors have found the tool super useful, and even added their own books:

I launched the prosecraft website in the summer of 2017, and I started showing it off to authors at writers conferences. The response was universally positive, and I incorporated the prosecraft analytic tools into the Shaxpir desktop application so that authors could privately run these analytics on their own works-in-progress (without ever sharing those analyses publicly, or even privately with us in our cloud).

I’ve spent thousands of hours working on this project, cleaning up and annotating text, organizing and tweaking things. A small handful of authors have even reached out to me, asking to have their books added to the website. I was grateful for their enthusiasm.

But in the meantime, “AI” became a thing.

And the arrival of AI on the scene has been tainted by early use-cases that allow anyone to create zero-effort impersonations of artists, cutting those creators out of their own creative process.

That’s not something I ever wanted to participate in.

Smith took the project down entirely because of that. He doesn’t want to get lumped in with other projects, and even though his project is almost certainly legal, he recognized that this was becoming an issue:

Today the community of authors has spoken out, and I’m listening. I care about you, and I hear your objections.

Your feelings are legitimate, and I hope you’ll accept my sincerest apologies. I care about stories. I care about publishing. I care about authors. I never meant to hurt anyone. I only hoped to make something that would be fun and useful and beautiful, for people like me out there struggling to tell their own stories.

I find all of this really unfortunate. Smith built something really cool, really amazing, that does not, in any way, infringe on anyone’s rights. I get the kneejerk reaction from some authors, who feared that this was some obnoxious project, but couldn’t they have taken 10 minutes to look at the details of what it was they were killing?

I know we live in an outrage era, where the immediate reaction is to turn the outrage meter up to 11. I’m certainly guilty of that at times myself. But this whole incident is just sad. It was an overreaction from the start, destroying what had been a clear labor of love and a useful project, through misleading and misguided attacks from authors.

Source: The Fear Of AI Just Killed A Very Useful Tool | Techdirt

Preservation Fail: Hasbro Wants Old ‘Transformers’ Games Re-Released, Except Activision Might Have Lost Them

And here we go again. We’ve been talking about how copyright has gotten in the way of cultural preservation generally for a while, and more specifically lately when it comes to the video game industry. The way this problem manifests itself is quite simple: video game publishers support the games they release for some period of time and then they stop. When they stop, depending on the type of game, it can make that game unavailable for legitimate purchase or use, either because the game is disappeared from retail and online stores, or because the servers needed to make it operational are taken offline. Meanwhile, copyright law prevents individuals and, in some cases, institutions from preserving and making those games available to the public, the way a library or museum would.

When you make these preservation arguments, one of the common retorts you get from the gaming industry and its apologists is that publishers already preserve these games for eventual re-release down the road, which is why they need to maintain their copyright protection on that content. We’ve pointed out failures to do so by the industry in the past, but the story about Hasbro wanting to re-release several older Transformers video games, and being unable to, is about as perfect an example as I can find.

Released in June 2010, Transformers: War for Cybertron was a well-received third-person shooter that got an equally great sequel in 2012, Fall of Cybertron. (And then in 2014 we got Rise of Dark Spark, which wasn’t very good and was tied into the live-action films.) What made the first two games so memorable and beloved was that they told their own stories about the origins of popular characters like Megatron and Optimus Prime while featuring kick-ass combat that included the ability to transform into different vehicles. Sadly, in 2018, all of these Activision-published Transformers games (and several it commissioned from other developers) were yanked from digital stores, making them hard to acquire and play in 2023. It seems that Hasbro now wants that to change, suggesting the games could make a perfect fit for Xbox Game Pass, once Activision, uh…finds them.

You read that right: finds them. What does that mean? Well, when Hasbro came calling to Activision looking to see if this was a possibility, it devolved into Activision doing a theatrical production parody called Dude, Where’s My Hard Drive? It seems that these games may or may not exist on some piece of hardware, but Activision literally cannot find it. Or maybe not, as you’ll read below. There seems to be some confusion about what Activision can and cannot find.

And, yes, the mantra in the comments that pirate sites are essentially solving for this problem certainly applies here as well. So much so, in fact, that it sure sounds like Hasbro went that route to get what it needed for the toy design portion of this.

Interestingly, Activision’s lack of organization seems to have caused some headaches for Hasbro’s toy designers who are working on the Gamer Edition figures. The toy company explained that it had to load up the games on their original platforms and play through them to find specific details they wanted to recreate for the toys.

“For World of Cybertron we had to rip it ourselves, because [Activision] could not find it—they kept sending concept art instead, which we didn’t want,” explained Hasbro. “So we booted up an old computer and ripped them all out from there. Which was a learning experience and a long weekend, because we just wanted to get it right, so that’s why we did it like that.”

What’s strange is that despite the above, Activision responded to initial reports of all this indicating that the headlines were false and it does have… code. Or something.

Hasbro itself then followed up apologizing for the confusion, also saying that it made an error in stating the games were “lost”. But what’s strange about all that, in addition to the work that Hasbro did circumventing having access to the actual games themselves, is the time delta it took for Activision to respond to all of this.

Activision has yet to confirm if it actually knows where the source code for the games is specifically located. I also would love to know why Activision waited so long to comment (the initial interview was posted on July 28) and why Hasbro claimed to not have access to key assets when developing its toys based on the games.

It’s also strange that Hasbro, which says it wants to put these games on Game Pass, hasn’t done so for years now. If the games aren’t lost, give ‘em to Hasbro, then?

Indeed. If this was all a misunderstanding, so be it. But if this was all pure misunderstanding, the rest of the circumstances surrounding this story don’t make a great deal of sense. At the very least, the possibility that these games could simply have been lost to the world is concerning, and yet another data point for an industry that simply needs to do better when it comes to preservation efforts.

Source: Preservation Fail: Hasbro Wants Old ‘Transformers’ Games Re-Released, Except Activision Might Have Lost Them | Techdirt

Gravity changes how it works at low acceleration, shown by observations of widely separated binary stars

A new study reports conclusive evidence for the breakdown of standard gravity in the low acceleration limit from a verifiable analysis of the orbital motions of long-period, widely separated, binary stars, usually referred to as wide binaries in astronomy and astrophysics.

The study, carried out by Kyu-Hyun Chae, professor of physics and astronomy at Sejong University in Seoul, used up to 26,500 wide binaries within 650 light-years (LY) observed by the European Space Agency’s Gaia space telescope. The study was published in the 1 August 2023 issue of the Astrophysical Journal.

As a key improvement over other studies, Chae’s study focused on calculating the gravitational accelerations experienced by the binary stars as a function of their separation or, equivalently, their orbital period, using a Monte Carlo deprojection of observed sky-projected motions to three-dimensional space.

Chae explains, “From the start it seemed clear to me that gravity could be most directly and efficiently tested by calculating accelerations, because gravity itself is an acceleration. My recent research experiences with galactic rotation curves led me to this idea. Galactic disks and wide binaries share some similarity in their orbits, though wide binaries follow highly elongated orbits while hydrogen gas particles in a galactic disk follow nearly circular orbits.”

Also, unlike other studies, Chae calibrated the occurrence rate of hidden nested inner binaries at a benchmark acceleration.

The study finds that when two stars orbit around each other with accelerations lower than about one nanometer per second squared, they start to deviate from the predictions of Newton’s universal law of gravitation and Einstein’s general relativity.

For accelerations lower than about 0.1 nanometer per second squared, the observed acceleration is about 30 to 40% higher than the Newton-Einstein prediction. The significance is very high, meeting the conventional criterion of 5 sigma for a scientific discovery. In a sample of 20,000 wide binaries within a distance limit of 650 LY, two independent acceleration bins respectively show deviations of over 5 sigma significance in the same direction.
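For a sense of scale, a quick Newtonian back-of-envelope calculation shows just how widely separated such binaries are. For a pair of roughly solar-mass stars, the mutual acceleration a = GM/r² falls to about one nanometer per second squared only at separations of a few thousand astronomical units (the numbers below are illustrative round values, not figures from Chae’s paper):

```python
# Back-of-envelope: at what separation does the mutual Newtonian
# acceleration of a binary drop to ~1 nanometer/s^2?
# Round illustrative numbers, not values from the study.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

def separation_for_acceleration(a, mass=2 * M_SUN):
    """Separation r where a = G * M / r^2 for total mass M."""
    return (G * mass / a) ** 0.5

r = separation_for_acceleration(1e-9)   # the ~1 nm/s^2 threshold
print(f"separation: {r / AU:.0f} AU")   # a few thousand AU
```

Orbits this wide take hundreds of thousands of years, which is why the test relies on statistically deprojecting instantaneous sky-projected motions rather than tracing full orbits.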

Because the observed accelerations stronger than about 10 nanometers per second squared agree well with the Newton-Einstein prediction from the same analysis, the observed boost of accelerations at lower accelerations is a mystery. What is intriguing is that this breakdown of the Newton-Einstein theory at accelerations weaker than about one nanometer per second squared was suggested 40 years ago by theoretical physicist Mordehai Milgrom at the Weizmann Institute in Israel in a new theoretical framework called modified Newtonian dynamics (MOND) or Milgromian dynamics in current usage.

Moreover, the boost factor of about 1.4 is correctly predicted by a MOND-type Lagrangian theory of gravity called AQUAL, proposed by Milgrom and the late physicist Jacob Bekenstein. What is remarkable is that the correct boost factor requires the external field effect from the Milky Way galaxy that is a unique prediction of MOND-type modified gravity. Thus, what the wide binary data show are not only the breakdown of Newtonian dynamics but also the manifestation of the external field effect of modified gravity.

On the results, Chae says, “It seems impossible that a conspiracy or unknown systematic can cause these acceleration-dependent breakdown of the standard gravity in agreement with AQUAL. I have examined all possible systematics as described in the rather long paper. The results are genuine. I foresee that the results will be confirmed and refined with better and larger data in the future. I have also released all my codes for the sake of transparency and to serve any interested researchers.”

Unlike galactic rotation curves in which the observed boosted accelerations can, in principle, be attributed to dark matter in the Newton-Einstein standard gravity, wide binary dynamics cannot be affected by it even if it existed. The standard gravity simply breaks down in the weak acceleration limit in accordance with the MOND framework.

Implications of wide binary dynamics are profound in astrophysics, theoretical physics, and cosmology. Anomalies in Mercury’s orbit observed in the nineteenth century eventually led to Einstein’s general relativity.

Now anomalies in wide binaries require a new theory extending general relativity to the low acceleration MOND limit. Despite all the successes of Newton’s gravity, general relativity is needed for relativistic gravitational phenomena such as black holes and gravitational waves. Likewise, despite all the successes of general relativity, a new theory is needed for MOND phenomena in the weak acceleration limit. The weak-acceleration catastrophe of gravity may have some similarity to the ultraviolet catastrophe of classical electrodynamics that led to quantum physics.

Wide binary anomalies are a disaster for the standard gravity and cosmology that rely on dark matter and dark energy concepts. Because gravity follows MOND, a large amount of dark matter in galaxies (and even in the universe) is no longer needed. This is also a big surprise to Chae who, like typical scientists, “believed in” dark matter until a few years ago.

A new revolution in physics seems now under way. Milgrom says, “Chae’s finding is a result of a very involved analysis of cutting-edge data, which, as far as I can judge, he has performed very meticulously and carefully. But for such a far-reaching finding—and it is indeed very far reaching—we require confirmation by independent analyses, preferably with better future data.”

“If this anomaly is confirmed as a breakdown of Newtonian dynamics, and especially if it indeed agrees with the most straightforward predictions of MOND, it will have enormous implications for astrophysics, cosmology, and for fundamental physics at large.”

Xavier Hernandez, professor at UNAM in Mexico who first suggested wide binary tests of gravity a decade ago, says, “It is exciting that the departure from Newtonian gravity that my group has claimed for some time has now been independently confirmed, and impressive that this departure has for the first time been correctly identified as accurately corresponding to a detailed MOND model. The unprecedented accuracy of the Gaia satellite, the large and meticulously selected sample Chae uses and his detailed analysis, make his results sufficiently robust to qualify as a discovery.”

Pavel Kroupa, professor at Bonn University and at Charles University in Prague, has come to the same conclusions concerning the law of gravitation. He says, “With this test on wide binaries as well as our tests on open star clusters nearby the sun, the data now compellingly imply that gravitation is Milgromian rather than Newtonian. The implications for all of astrophysics are immense.”

More information: Kyu-Hyun Chae, Breakdown of the Newton–Einstein Standard Gravity at Low Acceleration in Internal Dynamics of Wide Binary Stars, The Astrophysical Journal (2023). DOI: 10.3847/1538-4357/ace101

Source: Smoking-gun evidence for modified gravity at low acceleration from Gaia observations of wide binary stars

China floats rules for facial recognition technology – they are good and it would be great if the govt was bound by them too!

China has released draft regulations to govern the country’s facial recognition technology that include prohibitions on its use to analyze race or ethnicity.

According to the Cyberspace Administration of China (CAC), the purpose is to “regulate the application of face recognition technology, protect the rights and interests of personal information and other personal and property rights, and maintain social order and public safety” as outlined by a smattering of data security, personal information, and network laws.

The draft rules, which are open for comments until September 7, include some vague directives not to use face recognition technology to disrupt social order, endanger national security, or infringe on the rights of individuals and organizations.

The rules also state that facial recognition tech must be used only when there is a specific purpose and sufficient necessity, strict protection measures are taken, and only when non-biometric measures won’t do.

The rules require consent to be obtained before processing face information, except for cases where it’s not required – which The Reg assumes means for individuals such as prisoners and in instances of national security. Parental or guardian consent is needed for those under the age of 14.

Building managers can’t require its use to enter and exit a property – they must provide alternative means of verifying personal identity for those who want them.

Facial recognition also can’t be relied on for “major personal interests” such as social assistance and real estate disposal. For those, manual verification of personal identity must be used, with facial recognition only as an auxiliary means of verifying identity.

And collecting images for internal management should only be done in a reasonably sized area.

In businesses like hotels, banks, airports, art galleries, and more, the tech should not be used to verify personal identity. If the individual chooses to link their identity to the image, they should be informed either verbally or in writing and provide consent.

Collecting images is also not allowed in private spaces like hotel rooms, public bathrooms, and changing rooms.

Furthermore, those using facial surveillance techniques must display reminder signs; personal images and identification information must be kept confidential, and only anonymized data may be saved.

Under the draft regs, those that store face information of more than 10,000 people must register with a local branch of the CAC within 30 working days.

Most interesting, however, is Article 11, which, when translated from Chinese via automated tools, reads:

No organization or individual shall use face recognition technology to analyze personal race, ethnicity, religion, sensitive personal information such as beliefs, health status, social class, etc.

The CAC does not say if the Chinese Communist Party counts as an “organization.”

Human rights groups have credibly asserted that Uyghurs are routinely surveilled using facial recognition technology, in addition to being incarcerated, required to perform forced labor, re-educated to abandon their beliefs and cultural practices, and may even be subjected to sterilization campaigns.

Just last month, physical security monitoring org IPVM reported it came into possession of a contract between China-based Hikvision and Hainan Province’s Chengmai County for $6 million worth of cameras that could detect whether a person was ethnically Uyghur using minority recognition technology.

Hikvision denied the report and said it last provided such functionality in 2018.

Beyond facilitating identification of Uyghurs, it’s clear the cat is out of the bag when it comes to facial recognition technology in China, by government and businesses alike. Local police use it to track down criminals and its use feeds into China’s social credit system.

“‘Sky Net,’ a facial recognition system that can scan China’s population of about 1.4 billion people in a second, is being used in 16 Chinese cities and provinces to help police crack down on criminals and improve security,” said state-sponsored media in 2018.

Regardless, the CAC said that once the draft rules are passed, violators would be subject to criminal and civil liability.

Source: China floats rules for facial recognition technology • The Register

Nuclear Fusion Scientists Successfully Recreate Net Energy Gain

[…] Reuters reports that scientists with the Lawrence Livermore National Laboratory’s National Ignition Facility in California repeated a fusion ignition reaction. The lab’s first breakthrough was announced by the U.S. Department of Energy in December. While the previous experiment produced net energy gain, a spokesperson from the lab told the outlet that this second experiment, conducted on July 30, produced an even higher energy yield. While the laboratory called the experiment a success, results from the test are still being analyzed.

[…]

While fusion reactions are a staple of physics, they had previously always required more energy in than they produced, making the net energy gain in both reactions a noteworthy result. The Department of Energy revealed in its December announcement that the fusion test conducted by the laboratory at that time required 2 megajoules of energy while it produced 3 megajoules of energy. The previous fusion experiment conducted at the National Ignition Facility used 192 lasers focused on a peppercorn-sized target. Those lasers create temperatures as high as 100 million degrees Fahrenheit and pressures of over 100 billion Earth atmospheres in order to induce a fusion reaction in the target.
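As a quick sanity check on the figures quoted above (the 2 MJ in / 3 MJ out values come from the December announcement; the variable names and the “target gain” framing here are ours):

```python
# Back-of-envelope check of the December NIF shot, using the quoted values:
# 2 megajoules of laser energy in, 3 megajoules of fusion energy out.
energy_in_mj = 2.0
energy_out_mj = 3.0

# "Target gain" Q is simply output over input; Q > 1 means net energy gain.
q = energy_out_mj / energy_in_mj
surplus_mj = energy_out_mj - energy_in_mj

print(f"Q = {q:.2f}")             # 1.50, i.e. 50% more energy out than in
print(f"Surplus: {surplus_mj:.1f} MJ")
```

Note this counts only the laser energy delivered to the target, not the (much larger) energy drawn from the grid to power the lasers themselves.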

[…]

 

Source: Nuclear Fusion Scientists Successfully Recreate Net Energy Gain

AI listens to keyboards on video conferences – decodes passwords

[…] a new paper from the UK that shows how researchers trained an AI to decode keystrokes from noise on conference calls.

The researchers point out that people don’t expect sound-based exploits. The paper reads, “For example, when typing a password, people will regularly hide their screen but will do little to obfuscate their keyboard’s sound.”

The technique uses the same kind of attention network that makes models like ChatGPT so powerful. It seems to work well, as the paper claims a 97% peak accuracy over both telephone and Zoom audio. In addition, where the model was wrong, it tended to be close, identifying an adjacent keystroke instead of the correct one. These errors would be easy to correct for in software, or even mentally, given how infrequent they are. If you see the sentence “Paris im the s[ring,” you can probably figure out what was really typed.
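The adjacent-key correction could be sketched roughly like this (a toy illustration, not the paper’s method; the adjacency map and word list here are our own stand-ins for a QWERTY layout and a real dictionary):

```python
# Toy sketch of fixing an acoustic-side-channel misread, assuming errors
# tend to land on physically adjacent keys, as the paper observes.
# The adjacency map and word list are illustrative, not from the paper.
ADJACENT = {
    "[": "p']",      # '[' sits next to 'p' on a QWERTY keyboard
    "m": "njk,",     # 'm' sits next to 'n', 'j', 'k', ','
    "s": "awedxz",
}

WORDS = {"paris", "in", "the", "spring"}  # stand-in for a real dictionary

def correct(word: str) -> str:
    """Return a dictionary word reachable by swapping one key for a neighbour."""
    lowered = word.lower()
    if lowered in WORDS:
        return word
    for i, ch in enumerate(lowered):
        for alt in ADJACENT.get(ch, ""):
            candidate = lowered[:i] + alt + lowered[i + 1:]
            if candidate in WORDS:
                return candidate
    return word  # no single-neighbour fix found; give up

print(correct("s[ring"))  # -> spring
print(correct("im"))      # -> in
```

A real attacker would use a full keyboard-adjacency graph and a language model instead of a word set, but the principle is the same: near-misses on neighbouring keys are cheap to repair.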

[…]

Source: Noisy Keyboards Sink Ships | Hackaday

North Korean hackers put backdoors in Russian hypersonic missile maker computers

Reuters found that cyber-espionage teams linked to the North Korean government – which security researchers call ScarCruft and Lazarus – secretly installed stealthy digital backdoors into systems at NPO Mashinostroyeniya, a rocket design bureau based in Reutov, a small town on the outskirts of Moscow.

Reuters could not determine whether any data was taken during the intrusion or what information may have been viewed.

[…]

Source: North Korean hackers stole secrets of Russian hypersonic missile maker – EURACTIV.com

Scientists observe first evidence of ‘quantum superchemistry’ in the laboratory

A team from the University of Chicago has announced the first evidence for “quantum superchemistry”—a phenomenon where particles in the same quantum state undergo collective accelerated reactions. The effect had been predicted, but never observed in the laboratory.

[…]

Chin’s group is experienced with herding atoms into quantum states, but molecules are larger and much more complex than atoms—so the group had to invent new techniques to wrangle them.

In the experiments, the scientists cooled down cesium atoms and coaxed them into the same quantum state. Next, they watched as the atoms reacted to form molecules.

In ordinary chemistry, the atoms would collide, and there’s a probability for each collision to form a molecule. However, quantum mechanics predicts that atoms in the same quantum state act collectively instead.

[…]

One consequence is that the reaction happens faster than it would under ordinary conditions. In fact, the more atoms in the system, the faster the reaction happens.

Another consequence is that the final molecules share the same molecular state. Chin explained that the same molecules in different states can have different physical and chemical properties—but there are times when you want to create a batch of molecules in a specific state. In traditional chemistry, you’re rolling the dice. “But with this technique, you can steer the molecules into an identical state,” he said.

[…]

More information: Zhendong Zhang et al, Many-body chemical reactions in a quantum degenerate gas, Nature Physics (2023). DOI: 10.1038/s41567-023-02139-8

Source: Scientists observe first evidence of ‘quantum superchemistry’ in the laboratory

What? AI-Generated Art Banned from Future Dungeons & Dragons Books After “Fan Uproar” (Or ~1600 tweets about it)

A Dungeons & Dragons expansion book included AI-generated artwork. Fans on Twitter spotted it before the book was even released (noting, among other things, a wolf with human feet). An embarrassed representative for Wizards of the Coast then tweeted out an announcement about new guidelines stating explicitly that “artists must refrain from using AI art generation as part of their creation process for developing D&D art.” GeekWire reports: The artist in question, Ilya Shkipin, is a California-based painter, illustrator, and operator of an NFT marketplace, who has worked on projects for Renton, Wash.-based Wizards of the Coast since 2014. Shkipin took to Twitter himself on Friday, and acknowledged in several now-deleted tweets that he’d used AI tools to “polish” several original illustrations and concept sketches. As of Saturday morning, Shkipin had taken down his original tweets and announced that the illustrations for Glory of the Giants are “going to be reworked…”

While the physical book won’t be out until August 15, the e-book is available now from Wizards’ D&D Beyond digital storefront.
Wizards of the Coast emphasized this won’t happen again. About this particular incident, they noted “We have worked with this artist since 2014 and he’s put years of work into books we all love. While we weren’t aware of the artist’s choice to use AI in the creation process for these commissioned pieces, we have discussed with him, and he will not use AI for Wizards’ work moving forward.”

GeekWire adds that the latest D&D video game, Baldur’s Gate 3, “went into its full launch period on Tuesday. Based on metrics such as its player population on Steam, BG3 has been an immediate success, with a high of over 709,000 people playing it concurrently on Saturday afternoon.”

Source: AI-Generated Art Banned from Future ‘Dungeons & Dragons’ Books After Fan Uproar – Slashdot

Really? 1,600 tweets about this counts as an “uproar” and was enough to turn policy anti-AI? If you actually look at the pictures, only the wolf with human feet was strange; the rest of the complaints didn’t hold up in my eyes. Welcome to life – we have AIs now and people are going to use them. They are going to save artists loads of time and let them create really, really cool stuff… like these pictures!

Come on Wizards of the Coast, don’t be luddites.

MIT Boffins Build Battery Alternative Out of Cement, Carbon Black, and Water

Long-time Slashdot reader KindMind shares a report from The Register: Researchers at MIT claim to have found a novel new way to store energy using nothing but cement, a bit of water, and powdered carbon black — a finely divided, conductive form of the element. The materials can be cleverly combined to create supercapacitors, which could in turn be used to build power-storing foundations for houses, roadways that wirelessly charge vehicles, and bases for wind turbines and other renewable energy systems — all while holding a surprising amount of energy, the team claims. According to a paper published in the Proceedings of the National Academy of Sciences, 45 cubic meters of the carbon-black-doped cement could have enough capacity to store 10 kilowatt-hours of energy — roughly the amount an average household uses in a day. A block of cement that size would measure about 3.5 meters per side and, depending on the size of the house, the block could theoretically store all the energy an off-grid home using renewables would need. […]

Just three percent of the mixture has to be carbon black for the hardened cement to act as a supercapacitor, but the researchers found that a 10 percent carbon black mixture appears to be ideal. Beyond that ratio, the cement becomes less stable — not something you want in a building or foundation. The team notes that non-structural use could allow higher concentrations of carbon black, and thus higher energy storage capacity. The team has only built a tiny one-volt test platform using its carbon black mix, but has plans to scale up to supercapacitors the same size as a 12-volt automobile battery — and eventually to the 45 cubic meter block. Along with being used for energy storage, the mix could also be used to provide heat — by applying electricity to the conductive carbon network encased in the cement, MIT noted.
As Science magazine puts it, “Tesla’s Powerwall, a boxy, wall-mounted, lithium-ion battery, can power your home for half a day or so. But what if your home was the battery?”
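The quoted figures are easy to check against each other (a back-of-envelope sketch using only the numbers reported above):

```python
# Sanity check of the reported cement-supercapacitor figures:
# 45 cubic metres of doped cement storing 10 kWh.
volume_m3 = 45.0
energy_kwh = 10.0

# Side length of a cube with that volume, and the implied energy density.
side_m = volume_m3 ** (1 / 3)
density_kwh_per_m3 = energy_kwh / volume_m3

print(f"Cube side: {side_m:.2f} m")                    # ~3.56 m, matching "about 3.5 m per side"
print(f"Energy density: {density_kwh_per_m3:.3f} kWh/m^3")  # ~0.222 kWh per cubic metre
```

That density is tiny compared with a lithium-ion pack, which is the point: the storage is meant to be hidden in structural concrete you were pouring anyway, not to compete on compactness.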

Source: MIT Boffins Build Battery Alternative Out of Cement, Carbon Black, and Water – Slashdot

Will Browsers Be Required By Law To Stop You From Visiting Infringing Sites?

Mozilla’s Open Policy & Advocacy blog has news about a worrying proposal from the French government:

In a well-intentioned yet dangerous move to fight online fraud, France is on the verge of forcing browsers to create a dystopian technical capability. Article 6 (para II and III) of the SREN Bill would force browser providers to create the means to mandatorily block websites present on a government provided list.

The post explains why this is an extremely dangerous approach:

A world in which browsers can be forced to incorporate a list of banned websites at the software-level that simply do not open, either in a region or globally, is a worrying prospect that raises serious concerns around freedom of expression. If it successfully passes into law, the precedent this would set would make it much harder for browsers to reject such requests from other governments.

If a capability to block any site on a government blacklist were required by law to be built in to all browsers, then repressive governments would be given an enormously powerful tool. There would be no way around that censorship, short of hacking the browser code. That might be an option for open source coders, but it certainly won’t be for the vast majority of ordinary users. As the Mozilla post points out:

Such a move will overturn decades of established content moderation norms and provide a playbook for authoritarian governments that will easily negate the existence of censorship circumvention tools.

It is even worse than that. If such a capability to block any site were built in to browsers, it’s not just authoritarian governments that would be rubbing their hands with glee: the copyright industry would doubtless push for allegedly infringing sites to be included on the block list too. We know this because it has already done so in the past, as discussed in Walled Culture the book (free digital versions).

Not many people now remember, but in 2004, BT (British Telecom) caused something of a storm when it created CleanFeed:

British Telecom has taken the unprecedented step of blocking all illegal child pornography websites in a crackdown on abuse online. The decision by Britain’s largest high-speed internet provider will lead to the first mass censorship of the web attempted in a Western democracy.

Here’s how it worked:

Subscribers to British Telecom’s internet services such as BTYahoo and BTInternet who attempt to access illegal sites will receive an error message as if the page was unavailable. BT will register the number of attempts but will not be able to record details of those accessing the sites.

The key justification for what the Guardian called “the first mass censorship of the web attempted in a Western democracy” was that it only blocked illegal child sexual abuse material Web sites. It was therefore an extreme situation requiring an exceptional solution. But seven years later, the copyright industry were able to convince a High Court judge to ignore that justification, and to take advantage of CleanFeed to block a site, Newzbin 2, that had nothing to do with child sexual abuse material, and therefore did not require exceptional solutions:

Justice Arnold ruled that BT must use its blocking technology CleanFeed – which is currently used to prevent access to websites featuring child sexual abuse – to block Newzbin 2.

Exactly the logic used by copyright companies to subvert CleanFeed could be used to co-opt the censorship capabilities of browsers with built-in Web blocking lists. As with CleanFeed, the copyright industry would doubtless argue that since the technology already exists, why not apply it to tackling copyright infringement too?

That very real threat is another reason to fight this pernicious, misguided French proposal. Because if it is implemented, it will be very hard to stop it becoming yet another technology that the copyright world demands should be bent to its own selfish purposes.

Source: Will Browsers Be Required By Law To Stop You From Visiting Infringing Sites? | Techdirt

Very scary indeed

Academic Book About Emojis Can’t Include The Emojis It Talks About Because Of Copyright

Jieun Kiaer, an Oxford professor of Korean linguistics, recently published an academic book called Emoji Speak: Communications and Behaviours on Social Media. As you can tell from the name, it’s a book about emoji, and about how people communicate with them:

Exploring why and how emojis are born, and the different ways in which people use them, this book highlights the diversity of emoji speak. Presenting the results of empirical investigations with participants of British, Belgian, Chinese, French, Japanese, Jordanian, Korean, Singaporean, and Spanish backgrounds, it raises important questions around the complexity of emoji use.

Though emojis have become ubiquitous, their interpretation can be more challenging. What is humorous in one region, for example, might be considered inappropriate or insulting in another. Whilst emoji use can speed up our communication, we might also question whether they convey our emotions sufficiently. Moreover, far from belonging to the youth, people of all ages now use emoji speak, prompting Kiaer to consider the future of our communication in an increasingly digital world.

Sounds interesting enough, but as law professor Eric Goldman highlights with an image from the book, Kiaer was apparently unable to actually show examples of many of the emoji she was discussing due to copyright fears. While companies like Twitter and Google have offered up their own emoji sets under open licenses, not all of them have, and some of the specifics about the variations in how different companies represent different emoji apparently were key to the book.

So, for those, Kiaer actually hired an artist, Loli Kim, to draw similar emoji!

The page reads as follows (with paragraph breaks added for readability):

Notes on Images of Emojis

Social media spaces are almost entirely copyright free. They do not follow the same rules as the offline world. For example, on Twitter you can retweet any tweet and add your own opinion. On Instagram, you can share any post and add stickers or text. On TikTok, you can even ‘duet’ a video to add your own video next to a pre-existing one. As much as each platform has its own rules and regulations, people are able to use and change existing material as they wish. Thinking about copyright brings to light barriers that exist between the online and offline worlds. You can use any emoji in your texts, tweets, posts and videos, but if you want to use them in the offline world, you may encounter a plethora of copyright issues.

In writing this book, I have learnt that online and offline exist upon two very different foundations. I originally planned to have plenty of images of emojis, stickers, and other multi-modal resources featured throughout this book, but I have been unable to for copyright reasons. In this moment, I realized how difficult it is to move emojis from the online world into the offline world.

Even though I am writing this book about emojis and their significance in our lives, I cannot use images of them in even an academic book. Were I writing a tweet or Instagram post, however, I would likely have no problem. Throughout this book, I stress that emoji speak in online spaces is a grassroots movement in which there are no linguistic authorities and corporations have little power to influence which emojis we use. Comparatively, in offline spaces, big corporations take ownership of our emoji speak, much like linguistic authorities dictate how we should write and speak properly.

This sounds like something out of a science fiction story, but it is an important fact of which to be aware. While the boundaries between our online and offline words may be blurring, barriers do still exist between them. For this reason, I have had to use an artist’s interpretation of the images that I originally had in mind for this book. Links to the original images have been provided as endnotes, in case readers would like to see them.

Just… incredible. Now, my first reaction to this is that using the emoji and stickers and whatnot in the book seems like a very clear fair use situation. But… that requires a publisher willing to take up the fight (and an insurance company behind the publisher willing to finance that fight). And, that often doesn’t happen. Publishers are notoriously averse to supporting fair use, because they don’t want to get sued.

But, really, this just ends up highlighting (once again) the absolute ridiculousness of copyright in the modern world. No one in their right mind would think that a book about emoji is somehow harming the market for whatever emoji or stickers the professor wished to include. Yet, due to the nature of copyright, here we are. With an academic book about emoji that can’t even include the emoji being spoken about.

Source: Academic Book About Emojis Can’t Include The Emojis It Talks About Because Of Copyright | Techdirt