The Linkielist

Linking ideas with the world

NASA Demonstrates Software ‘Brains’ Shared Across Satellite Swarms

[…] Distributed Spacecraft Autonomy (DSA) allows individual spacecraft to make independent decisions while collaborating with each other to achieve common goals – all without human input.

NASA researchers have achieved multiple firsts in tests of such swarm technology as part of the agency’s DSA project. Managed at NASA’s Ames Research Center in California’s Silicon Valley, the DSA project develops software tools critical for future autonomous, distributed, and intelligent swarms that will need to interact with each other to achieve complex mission objectives.

[…]

Distributed space missions rely on interactions between multiple spacecraft to achieve mission goals. Such missions can deliver better data to researchers and ensure continuous availability of critical spacecraft systems.

[…]

Distributing autonomy across a group of interacting spacecraft allows for all spacecraft in a swarm to make decisions and is resistant to individual spacecraft failures.

The DSA team advanced swarm technology through two main efforts: the development of software for small spacecraft that was demonstrated in space during NASA’s Starling mission, which involved four CubeSat satellites operating as a swarm to test autonomous collaboration and operation with minimal human operation, and a scalability study of a simulated spacecraft swarm in a virtual lunar orbit.

Experimenting With DSA in Low Earth Orbit

The team gave Starling a challenging job: a fast-paced study of Earth’s ionosphere – where Earth’s atmosphere meets space – to show the swarm’s ability to collaborate and optimize science observations. The swarm decided what science to do on its own, with no pre-programmed science observations from ground operators.

“We did not tell the spacecraft how to do their science,” said Adams. “The DSA team figured out what science Starling did only after the experiment was completed. That has never been done before and it’s very exciting!”

The accomplishments of DSA onboard Starling include the first fully distributed autonomous operation of multiple spacecraft, the first use of space-to-space communications to autonomously share status information between multiple spacecraft, the first demonstration of fully distributed reactive operations onboard multiple spacecraft, the first use of a general-purpose automated reasoning system onboard a spacecraft, and the first use of fully distributed automated planning onboard multiple spacecraft.

During the demonstration, which took place between August 2023 and May 2024, Starling’s swarm of spacecraft received GPS signals that pass through the ionosphere and reveal interesting – often fleeting – features for the swarm to focus on. Because the spacecraft constantly change position relative to each other, the GPS satellites, and the ionospheric environment, they needed to exchange information rapidly to stay on task.
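
The "GPS signals that pass through the ionosphere" technique is dual-frequency ionospheric sounding: the ionosphere delays the two GPS carrier frequencies by different amounts, so the difference in measured pseudorange reveals the electron content along the signal path. A minimal sketch using the standard textbook relation (this is illustrative, not NASA's or Starling's actual processing):

```python
# GPS L1/L2 carrier frequencies in Hz (from the GPS interface spec).
F1 = 1575.42e6
F2 = 1227.60e6

def slant_tec(p1_m, p2_m):
    """Total electron content along the signal path, in TEC units
    (1 TECU = 1e16 electrons/m^2), from the L1 and L2 pseudoranges
    in meters. The ionosphere delays the lower-frequency L2 signal
    more, so p2 > p1 for a signal that crossed the ionosphere."""
    tec = (p2_m - p1_m) * F1**2 * F2**2 / (40.3 * (F1**2 - F2**2))
    return tec / 1e16

# A 3-meter L1/L2 pseudorange difference corresponds to roughly
# 28.6 TECU, a plausible daytime value.
```

A swarm tracking fleeting ionospheric features would watch quantities like this change across many satellite-to-GPS paths at once.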

Each Starling satellite analyzed and acted on its best results individually. As new information reached each spacecraft, observation and action plans were re-analyzed continuously, enabling the swarm to adapt quickly to changing situations.
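
One way to picture this kind of coordinator-free replanning: if every satellite runs the same deterministic algorithm over the shared status information, all members converge on the same conflict-free plan without any central authority. A toy sketch (the greedy auction policy, target names, and scores are illustrative assumptions, not DSA's actual planner):

```python
def assign_observations(scores):
    """Greedy one-target-per-satellite assignment.

    scores: {sat_id: {target: observation_value}} -- the shared state
    each satellite has assembled from space-to-space status messages.
    Because the algorithm is deterministic, every satellite computes
    the identical plan from the identical shared state."""
    bids = sorted(
        ((value, sat, tgt)
         for sat, targets in scores.items()
         for tgt, value in targets.items()),
        key=lambda b: (-b[0], b[1], b[2]))  # best value first; ties by id
    plan, taken_sats, taken_tgts = {}, set(), set()
    for value, sat, tgt in bids:
        if sat not in taken_sats and tgt not in taken_tgts:
            plan[sat] = tgt
            taken_sats.add(sat)
            taken_tgts.add(tgt)
    return plan
```

When a new status message arrives, each spacecraft simply reruns the assignment over the updated state, which is why the swarm can adapt quickly as geometry changes.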

[…]

The DSA lunar Position, Navigation, and Timing study demonstrated scalability of the swarm in a simulated environment. Over a two-year period, the team ran close to one hundred tests of more complex coordination between multiple spacecraft computers in both low- and high-altitude lunar orbit and showed that a swarm of up to 60 spacecraft is feasible.

The team is further developing DSA’s capabilities to allow mission operators to interact with even larger swarms – hundreds of spacecraft – as a single entity.

[…]

Source: NASA Demonstrates Software ‘Brains’ Shared Across Satellite Swarms   – NASA

Unions Sue to Block Elon Musk’s Access to Americans’ Tax and Benefits Records

A coalition of labor organizations representing federal workers and retirees has sued the Department of the Treasury to block it from giving the newly created Department of Government Efficiency, controlled by Elon Musk, access to the federal government’s sensitive payment systems.

After forcing out a security official who opposed the move, Treasury Secretary Scott Bessent granted DOGE workers access to the system last week, according to The New York Times. Despite its name, DOGE is not a government department but rather an ad-hoc group formed by President Trump purportedly tasked with cutting government spending.

The labor organizations behind the lawsuit filed Monday argue that Bessent broke federal privacy and tax confidentiality laws by giving unauthorized DOGE workers, including people like Musk who are not government employees, the ability to view the private information of anyone who pays taxes or receives money from federal agencies.

With access to the Treasury systems, DOGE representatives can potentially view the names, social security numbers, birth dates, mailing addresses, email addresses, and bank information of tens of millions of people who receive tax refunds, social security and disability payments, veterans benefits, or salaries from the federal government, according to the lawsuit.

“The scale of the intrusion into individuals’ privacy is massive and unprecedented,” according to the complaint filed by the Alliance for Retired Americans, the American Federation of Government Employees, and the Service Employees International Union.

[…]

In their lawsuit, the labor organizations argue that federal law prohibits the disclosure of taxpayer information to anyone except Treasury employees who require it for their official duties unless the disclosure is authorized by a specific law, which DOGE’s access to the system is not. DOGE’s access also violates the Privacy Act of 1974, which prohibits disclosure of personal information to unauthorized people and lays out strict procedures for changing those authorizations, which the Trump administration has not followed, according to the suit.

The plaintiffs have asked the Washington, D.C. district court to grant an injunction preventing unauthorized people from accessing the payment systems and to rule the Treasury’s actions unlawful.

Source: Unions Sue to Block Elon Musk’s Access to Americans’ Tax and Benefits Records

Apple chips can be hacked to leak secrets from Gmail, iCloud, and more in a browser

Apple-designed chips powering Macs, iPhones, and iPads contain two newly discovered vulnerabilities that leak credit card information, locations, and other sensitive data from the Chrome and Safari browsers as they visit sites such as iCloud Calendar, Google Maps, and Proton Mail.

The vulnerabilities, affecting the CPUs in later generations of Apple A- and M-series chip sets, open them to side channel attacks, a class of exploit that infers secrets by measuring manifestations such as timing, sound, and power consumption. Both side channels are the result of the chips’ use of speculative execution, a performance optimization that improves speed by predicting the control flow the CPUs should take and following that path, rather than the instruction order in the program.

A new direction

The Apple silicon affected takes speculative execution in new directions. Besides predicting the control flow a CPU should take, it also predicts the data flow, such as which memory address to load from and what value will be returned from memory.

The more powerful of the two side-channel attacks is named FLOP. It exploits a form of speculative execution implemented in the chips’ load value predictor (LVP), which predicts the contents of memory when they’re not immediately available. By inducing the LVP to forward values from malformed data, an attacker can read memory contents that would normally be off-limits. The attack can be leveraged to steal a target’s location history from Google Maps, inbox content from Proton Mail, and events stored in iCloud Calendar.

SLAP, meanwhile, abuses the load address predictor (LAP). Whereas LVP predicts the values of memory content, LAP predicts the memory locations where instruction data can be accessed. SLAP forces the LAP to predict the wrong memory addresses. Specifically, the value at an older load instruction’s predicted address is forwarded to younger arbitrary instructions. When Safari has one tab open on a targeted website such as Gmail, and another open tab on an attacker site, the latter can access sensitive strings of JavaScript code of the former, making it possible to read email contents.

“There are hardware and software measures to ensure that two open webpages are isolated from each other, preventing one of them from (maliciously) reading the other’s contents,” the researchers wrote on an informational site describing the attacks and hosting the academic papers for each one. “SLAP and FLOP break these protections, allowing attacker pages to read sensitive login-protected data from target webpages. In our work, we show that this data ranges from location history to credit card information.”
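
Conceptually, a load value predictor works like a "last value" predictor: once a load from some address has returned the same value enough times, the CPU speculatively forwards that value before the real load completes, and FLOP abuses the window where the forwarded value is stale. A toy software model of the idea (purely illustrative; Apple's actual hardware predictor is undocumented and certainly more sophisticated):

```python
class LastValuePredictor:
    """Toy model of a load value predictor: after an address has
    returned the same value `threshold` times in a row, start
    speculatively forwarding that value."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.history = {}  # addr -> (last_value, confidence)

    def load(self, addr, actual_value):
        """Returns (predicted_value_or_None, actual_value)."""
        last, conf = self.history.get(addr, (None, 0))
        prediction = last if conf >= self.threshold else None
        # Train the predictor on what the load actually returned.
        if actual_value == last:
            self.history[addr] = (last, conf + 1)
        else:
            self.history[addr] = (actual_value, 0)  # reset confidence
        return prediction, actual_value
```

After a few repeated loads the predictor forwards the trained value; when the memory then changes, it briefly forwards the stale value, and that transient mismatch between predicted and real data is the window a FLOP-style attack measures through a side channel.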

[…]

The following Apple devices are affected by one or both of the attacks:

• All Mac laptops from 2022–present (MacBook Air, MacBook Pro)
• All Mac desktops from 2023–present (Mac Mini, iMac, Mac Studio, Mac Pro)
• All iPad Pro, Air, and Mini models from September 2021–present (Pro 6th and 7th generation, Air 6th gen., Mini 6th gen.)
• All iPhones from September 2021–present (All 13, 14, 15, and 16 models, SE 3rd gen.)

[…]

Source: Apple chips can be hacked to leak secrets from Gmail, iCloud, and more – Ars Technica

AI-assisted works can finally get copyright with enough human creativity, says US Copyright Office

Artists can copyright works they made with the help of artificial intelligence, according to a new report by the U.S. Copyright Office that could further clear the way for the use of AI tools in Hollywood, the music industry and other creative fields.

The nation’s copyright office, which sits in the Library of Congress and is not part of the executive branch, receives about half a million copyright applications per year covering millions of individual works. It has increasingly been asked to register works that are AI-generated.

And while many of those decisions are made on a case-by-case basis, the report issued Wednesday clarifies the office’s approach as one based on what the top U.S. copyright official describes as the “centrality of human creativity” in authoring a work that warrants copyright protections.

“Where that creativity is expressed through the use of AI systems, it continues to enjoy protection,” said a statement from Register of Copyrights Shira Perlmutter, who directs the office.

An AI-assisted work could be copyrightable if an artist’s handiwork is perceptible. A human adapting an AI-generated output with “creative arrangements or modifications” could also make it fall under copyright protections.

[…]

Source: AI-assisted works can get copyright with enough human creativity, says US copyright office | AP News

Astronomers Call for Global Ban on Space Advertising Before It’s Too Late

In a statement adopted in October 2024, the American Astronomical Society declared that humankind’s scientific understanding of the universe is under threat from space activities, including the proliferation of satellite constellations, space debris, and radio- and electromagnetic interference. Of note is the potential for a space-based eyesore: giant billboards hanging out in low Earth orbit.

“It is the position of the American Astronomical Society that obtrusive space advertising should be prohibited by appropriate international convention, treaty, or law,” the statement read.

Congress already prohibits domestic launches of any “payload containing any material to be used for the purposes of obtrusive space advertising,” in which obtrusive space advertising is defined as “advertising in outer space that is capable of being recognized by a human being on the surface of the Earth without the aid of a telescope or other technological device.”

“The US federal ban on obtrusive space advertising is a critical bulwark against an insidious fouling of the natural sky by private interests,” said James Lowenthal, an astronomer at Smith College and member of the AAS’ Committee for the Protection of Astronomy and the Space Environment (COMPASSE), in an email to Gizmodo. “That ban recognizes that the sky belongs to everyone, and must be protected for all humans now and in the future.”

“But the ban applies only to US launches; other countries could approve launches of ‘space billboards’ from their soil that would be visible from around the world,” Lowenthal added. “That’s why an international ban is critical.”

[…]

Source: Astronomers Call for Global Ban on Space Advertising Before It’s Too Late

WhatsApp says journalists and civil society members were targets of Israeli spyware

Nearly 100 journalists and other members of civil society using WhatsApp, the popular messaging app owned by Meta, were targeted by spyware owned by Paragon Solutions, an Israeli maker of hacking software, the company alleged on Friday.

The journalists and other civil society members were alerted to a possible breach of their devices, with WhatsApp telling the Guardian it had “high confidence” that the 90 users in question had been targeted and “possibly compromised”.

It is not clear who was behind the attack. Like other spyware makers, Paragon’s hacking software is used by government clients and WhatsApp said it had not been able to identify the clients who ordered the alleged attacks.

Experts said the targeting was a “zero-click” attack, which means targets would not have had to click on any malicious links to be infected.

WhatsApp declined to disclose where the journalists and members of civil society were based, including whether they were based in the US.

Paragon has a US office in Chantilly, Virginia. The company has faced recent scrutiny after Wired magazine in October reported that it had entered into a $2m contract with the US Immigration and Customs Enforcement’s homeland security investigations division.

[…]

A person close to the company told the Guardian that Paragon had 35 government customers, that all of them could be considered democratic, and that Paragon did not do business with countries, including some democracies, that have previously been accused of abusing spyware. The person said that included Greece, Poland, Hungary, Mexico and India.

Paragon’s spyware is known as Graphite and has capabilities that are comparable to NSO Group’s Pegasus spyware. Once a phone is infected with Graphite, the operator of the spyware has total access to the phone, including being able to read messages that are sent via encrypted applications like WhatsApp and Signal.

The company, which was founded by the former Israeli prime minister Ehud Barak, has been the subject of media reports in Israel recently, after it was reported that the group was sold to a US private equity firm, AE Industrial Partners, for $900m.

[…]

Source: WhatsApp says journalists and civil society members were targets of Israeli spyware | WhatsApp | The Guardian

US healthcare provider data breach impacts 1 million patients

Community Health Center (CHC), a leading Connecticut healthcare provider, is notifying over 1 million patients of a data breach that impacted their personal and health data.

The non-profit organization provides primary medical, dental, and mental health services to more than 145,000 active patients.

CHC said in a Thursday filing with Maine’s attorney general that unknown attackers gained access to its network in mid-October 2024, a breach discovered more than two months later, on January 2, 2025.

The threat actors stole files containing personal and health information belonging to 1,060,936 individuals, but the healthcare organization says they did not encrypt any compromised systems and that the security breach did not impact its operations.

[…]

Depending on the affected patient, the attackers stole a combination of:

  • personal information (names, dates of birth, addresses, phone numbers, emails, Social Security numbers) or
  • health information (medical diagnoses, treatment details, test results, and health insurance information).

A CHC spokesperson was not immediately available when BleepingComputer reached out for more details on the incident.

While CHC said the hackers didn’t encrypt any of its systems, many ransomware operations have switched tactics in recent years to become data theft extortion groups.

[…]

In response to this surge of massive healthcare security breaches, the U.S. Department of Health and Human Services (HHS) proposed updates to HIPAA (short for Health Insurance Portability and Accountability Act of 1996) in late December to secure patients’ health data.

Source: US healthcare provider data breach impacts 1 million patients

Boom! The XB-1 Demonstrator Jet Has Gone Supersonic

Boom Supersonic’s XB-1 demonstrator has broken the sound barrier, marking a major milestone in an effort the company hopes will lead to a larger 55-seat supersonic airliner design known as Overture. Overall, the program could have significant implications not only for commercial aviation but also for the military.

Boom Supersonic’s XB-1 demonstrator eases past the sound barrier for the first time, going supersonic just over 11 minutes into its sortie today. YouTube screencap

The aircraft was flown to a speed of Mach 1.1 by former U.S. Navy aviator and Boom test pilot Tristan “Geppetto” Brandenburg from the Mojave Air & Space Port, California. For the majority of its flight, the XB-1 was accompanied by two other jets: an ATAC Mirage F1 flown by A.J. “Face” McFarland, serving as primary safety chase, and a T-38 Talon performing photo chase duties. During the flight, the XB-1 entered the supersonic realm three times, landing safely at Mojave after a flight lasting a little over 30 minutes.

[…]

Ultimately, XB-1 is expected to have a top speed of around Mach 2.2 (1,687.99 miles per hour).

The XB-1, also known as the “Baby Boom,” is a one-third-scale technology demonstrator for the Overture. It made its first flight at Mojave on March 22, 2024, as you can read about here. During that flight, the XB-1 was flown at speeds up to 238 knots (273 mph, or Mach 0.355), achieving an altitude of 7,120 feet. On that occasion, Chief Test Pilot Bill “Doc” Shoemaker was at the controls, while the flight was monitored by “Geppetto” Brandenburg, flying a T-38 Talon chase aircraft.

[…]

As we have outlined in the past, the aircraft is 62.6 feet long, and its elongated delta-wing planform has a wingspan of 21 feet. It makes extensive use of sophisticated technologies, including carbon-fiber composites, advanced avionics, and digitally optimized aerodynamics.

The XB-1 during an earlier test flight. Boom Supersonic

Its unusual propulsion system, which pushes it into the supersonic regime, comprises three General Electric J85-15 turbojets that together provide more than 12,000 pounds of thrust. The widely used J85 also powers, among others, the Northrop F-5 and the T-38. Since the XB-1 was rolled out, another three-engined aircraft has broken cover: the Chinese advanced tailless combat aircraft tentatively known as the J-36.

Compared to the XB-1, the Overture will be 201 feet long and is planned to achieve a cruising speed of Mach 1.7 (1,304 miles per hour) and a maximum speed of Mach 2.2. The company anticipates it will have a maximum range of 4,500 nautical miles.
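
As a sanity check on the article's conversions, both speed figures are consistent with multiplying the Mach number by a sea-level speed of sound of roughly 767.3 mph. (Mach number actually varies with altitude and temperature; this constant is simply what the article's own numbers imply.)

```python
# Sea-level speed of sound in mph implied by the article's figures;
# an assumption for this sketch, not a value stated in the text.
SPEED_OF_SOUND_MPH = 767.3

def mach_to_mph(mach):
    """Convert a Mach number to mph at the assumed sea-level
    speed of sound."""
    return mach * SPEED_OF_SOUND_MPH

# Mach 1.7 -> ~1,304 mph (Overture cruise)
# Mach 2.2 -> ~1,688 mph (XB-1 expected top speed)
```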

A rendering of Boom Supersonic’s Overture airliner. Boom Supersonic

Breaking the Mach 1 mark is a huge achievement for the company and an important statement of intent for the future Overture supersonic airliner.

Aimed at making supersonic travel affordable to greater numbers of travelers — a goal no other operator has achieved in the past — the Overture is planned to carry a total of 64–80 passengers. Intended to drastically shorten the duration of transoceanic routes, the aircraft is “designed … to be profitable for airlines at fares similar to first and business class,” the company’s website notes.

[…]

[…]

Source: Boom! The XB-1 Demonstrator Jet Has Gone Supersonic

Pebble Founder Is Bringing the Smartwatch Back as Google Open-Sources Its Software

There’s some good news to share for Pebble fans: The no-frills smartwatch is making a comeback. The Verge spoke to Pebble founder Eric Migicovsky today, who says he was able to convince Google to open-source the smartwatch’s operating system. Migicovsky is in the early stages of prototyping a new watch and spinning up a company again under a to-be-announced new name.

Founded back in 2012, Pebble was initially funded on Kickstarter and created smartwatches with e-ink displays, akin to what you’d find on a Kindle, that nailed the basics. They could display notifications, let users control their music, and last 5-7 days on a charge thanks to those power-sipping displays. The watches came in at affordable prices too, and they worked across both iOS and Android.

[…]

Fans of Pebble will be happy to know that whatever new smartwatch Migicovsky releases, it will be almost identical to what came before. “We’re building a spiritual, not successor, but clone of Pebble,” he says, “because there’s not that much I actually want to change.” Migicovsky plans to keep the software open-source and allow anyone to customize it for their watches. “There’s going to be the ability for anyone who wants to, to take Pebble source code, compile it, run it on their Pebbles, build new Pebbles, build new watches. They could even use it in random other hardware. Who knows what people can do with it now?”

And of course, this time around Migicovsky is using his own capital to grow the company in a sustainable way. After leaving Pebble, he started a messaging startup called Beeper, which was acquired by WordPress developer Automattic. Migicovsky has also served as an investor at Y Combinator.

It is unclear when Migicovsky’s first watch may be available, but updates will be shared at rePebble.com.

Source: Pebble Founder Is Bringing the Smartwatch Back as Google Open-Sources Its Software

Phone Metadata Suddenly Not So ‘Harmless’ When It’s The FBI’s Data Being Harvested

[…] While trying to fend off attacks on Section 215 collections (most of which are governed [in the loosest sense of the word] by the Third Party Doctrine), the NSA and its domestic-facing remora, the FBI, insisted collecting and storing massive amounts of phone metadata was no more a constitutional violation than it was a privacy violation.

Suddenly — thanks to the ongoing, massive compromising of major US telecom firms by Chinese state-sanctioned hackers — the FBI is getting hot and bothered about the bulk collection of its own phone metadata by (gasp!) a government agency. (h/t Kevin Collier on Bluesky)

FBI leaders have warned that they believe hackers who broke into AT&T Inc.’s system last year stole months of their agents’ call and text logs, setting off a race within the bureau to protect the identities of confidential informants, a document reviewed by Bloomberg News shows.

[…]

The data was believed to include agents’ mobile phone numbers and the numbers with which they called and texted, the document shows. Records for calls and texts that weren’t on the AT&T network, such as through encrypted messaging apps, weren’t part of the stolen data.

The agency (quite correctly!) believes the metadata could be used to identify agents, as well as their contacts and confidential sources. Of course it can.

[…]

The issue, of course, is that the Intelligence Community consistently downplayed this exact aspect of the bulk collection, claiming it was no more intrusive than scanning every piece of domestic mail (!) or harvesting millions of credit card records just because the Fourth Amendment (as interpreted by the Supreme Court) doesn’t say the government can’t.

There are real risks to real people who are affected by hacks like these. The same thing applies when the US government does it. It’s not just a bunch of data that’s mostly useless. Harvesting metadata in bulk allows the US government to do the same thing Chinese hackers are doing with it: identifying individuals, sussing out their personal networks, and building from that to turn numbers into adversarial actions — whether it’s the arrest of suspected terrorists or the further compromising of US government agents by hostile foreign forces.
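
The "sussing out personal networks" step is plain graph analysis: even without any message content, who-called-whom records reveal each number's set of contacts and which numbers sit at the center of a network. A minimal sketch (the record format and the hub heuristic are illustrative assumptions, not any agency's actual tooling):

```python
from collections import defaultdict

def contact_graph(call_records):
    """Build an undirected contact graph from call metadata.

    call_records: iterable of (caller, callee) pairs -- exactly the
    kind of information in bulk phone metadata.
    Returns {number: sorted list of distinct contacts}."""
    neighbors = defaultdict(set)
    for caller, callee in call_records:
        neighbors[caller].add(callee)
        neighbors[callee].add(caller)
    return {n: sorted(ns) for n, ns in neighbors.items()}

def likely_hubs(graph, top=3):
    """Rank numbers by distinct-contact count -- a crude proxy for
    who is central to a network (an agent, a handler, a source)."""
    return sorted(graph, key=lambda n: (-len(graph[n]), n))[:top]
```

Run over months of logs, even this naive degree count starts to separate central figures from peripheral contacts, which is precisely why "just metadata" is so revealing.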

The takeaway isn’t the inherent irony. It’s that the FBI and NSA spent years pretending the fears expressed by activists and legislators were overblown. Officials repeatedly claimed the information was of almost zero utility, even as they mounted several efforts to protect the collection from being shut down. In the end, the phone metadata program (at least as it applies to landlines) was terminated. But there’s more than a hint of egregious hypocrisy in the FBI’s sudden concern about how much can be revealed by “just” metadata.

Source: Phone Metadata Suddenly Not So ‘Harmless’ When It’s The FBI’s Data Being Harvested | Techdirt

Trump Disbands Cybersecurity Board Investigating Worst Hack in US History: Massive Chinese Phone System Invasion

[…] We’re still nowhere near understanding just how bad the Chinese hack of our phone system was. The incident, discovered only last fall, involved the Chinese hacking group Salt Typhoon, which used the US’s CALEA phone wiretapping system as a backdoor to gain incredible, unprecedented access to much of the US’s phone system “for months or longer.”

As details come out, the extent of the hackers’ access has become increasingly alarming. It is reasonable to call it the worst hack in US history.

Soon after it was discovered, Homeland Security tasked the Cyber Safety Review Board (CSRB) to lead an investigation into the hack to uncover what allowed it to happen and assess how bad it really was. The CSRB was established by Joe Biden to improve the government’s cybersecurity in the face of global cybersecurity attacks on our infrastructure and was made up of a mix of government and private sector cybersecurity experts.

And one of the first things Donald Trump did upon retaking the presidency was to dismantle the board, along with all other DHS Advisory Committees.

It’s one thing to say the new president should get to pick new members for these advisory boards, but it’s another thing altogether to just summarily dismiss the very board that is in the middle of investigating this hugely impactful hack of our telephone systems in a way that isn’t yet fully understood.

Just before the presidential switch, the Biden administration had announced sanctions against a Chinese front corporation that was connected to the hack. And while the details are still sparse, all indications are that this was a massive and damaging attack on critical US infrastructure.

And one of Trump’s moves is to disband the group of experts trying to get to the bottom of what happened.

This seems… bad?

Cybersecurity researcher Kevin Beaumont said on the social media platform Bluesky that the move would give Microsoft a “free pass,” referring to the CSRB’s critical report of the tech giant — and Beaumont’s former employer — over its handling of a prior Chinese hacker breach.

Jake Williams, faculty at IANS Research, went even further on the same website: “We should have been putting more resources into the CSRB, not dismantling it,” he wrote. “There’s zero doubt that killing the CSRB [would] hurt national security.”

While some have speculated that this move is an attempt to cover up the extent of the breach or even deliberately assist the Chinese, a more likely explanation is simple incompetence[…]

Source: Trump Disbands Cybersecurity Board Investigating Massive Chinese Phone System Hack | Techdirt

Circle to Search now offers one-tap actions for phone numbers, emails and URLs

[…] As a reminder, Circle to Search is an AI-powered feature Google released at the start of last year. You can access it by long-pressing your phone’s home button and then circling something with your finger. At its most basic, the feature is a way to use Google Search from anywhere on your phone, with no need to switch between apps. It’s particularly useful if you want to conduct an image search since you don’t need to take a screenshot or describe what you’re looking at to Google.

As for those enhancements I mentioned, Google is adding one-tap actions for phone numbers, email addresses and URLs, meaning if Circle to Search detects those, it will allow you to call, email or visit a website with a single tap. Again, there’s no need to switch between apps to interact with those elements.[…]
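
Detecting those elements amounts to entity recognition over the selected text. A crude sketch of the idea (the regexes below are illustrative only; Google's actual detection is certainly far more robust than simple pattern matching):

```python
import re

# Simplistic illustrative patterns for the three entity types
# Circle to Search now acts on.
PATTERNS = {
    "call":  re.compile(r"\+?\d[\d\s().-]{6,}\d"),   # phone-ish digit runs
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "open":  re.compile(r"https?://\S+"),            # URLs
}

def one_tap_actions(text):
    """Return (action, matched_text) pairs for anything in `text`
    that looks like a phone number, email address, or URL."""
    actions = []
    for action, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            actions.append((action, match.group()))
    return actions
```

Each detected span would then be surfaced as a single tappable chip (call, email, open) without leaving the current app.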

Source: Circle to Search now offers one-tap actions for phone numbers, emails and URLs

Subaru Security Flaws Exposed Its System for Tracking and Remotely Controlling Millions of Cars

About a year ago, security researcher Sam Curry bought his mother a Subaru, on the condition that, at some point in the near future, she let him hack it.

It took Curry until last November, when he was home for Thanksgiving, to begin examining the 2023 Impreza’s internet-connected features and start looking for ways to exploit them. Sure enough, he and a researcher working with him online, Shubham Shah, soon discovered vulnerabilities in a Subaru web portal that let them hijack the ability to unlock the car, honk its horn, and start its ignition, reassigning control of those features to any phone or computer they chose.

Most disturbing for Curry, though, was that they found they could also track the Subaru’s location—not merely where it was at the moment but also where it had been for the entire year that his mother had owned it. The map of the car’s whereabouts was so accurate and detailed, Curry says, that he was able to see her doctor visits, the homes of the friends she visited, even which exact parking space his mother parked in every time she went to church.

A year of location data for Sam Curry’s mother’s 2023 Subaru Impreza that Curry and Shah were able to access in Subaru’s employee admin portal thanks to its security vulnerabilities.

Screenshot Courtesy of Sam Curry

“You can retrieve at least a year’s worth of location history for the car, where it’s pinged precisely, sometimes multiple times a day,” Curry says. “Whether somebody’s cheating on their wife or getting an abortion or part of some political group, there are a million scenarios where you could weaponize this against someone.”

Curry and Shah today revealed in a blog post their method for hacking and tracking millions of Subarus, which they believe would have allowed hackers to target any of the company’s vehicles equipped with its digital features known as Starlink in the US, Canada, or Japan. Vulnerabilities they found in a Subaru website intended for the company’s staff allowed them to hijack an employee’s account to both reassign control of cars’ Starlink features and also access all the vehicle location data available to employees, including the car’s location every time its engine started, as shown in their video below.

Curry and Shah reported their findings to Subaru in late November, and Subaru quickly patched its Starlink security flaws. But the researchers warn that the Subaru web vulnerabilities are just the latest in a long series of similar web-based flaws that they and other security researchers have found affecting well over a dozen carmakers, including Acura, Genesis, Honda, Hyundai, Infiniti, Kia, Toyota, and many others. There’s little doubt, they say, that similarly serious hackable bugs exist in other auto companies’ web tools that have yet to be discovered.

[…]

Last summer, Curry and another researcher, Neiko Rivera, demonstrated to WIRED that they could pull off a similar trick with any of millions of vehicles sold by Kia. Over the prior two years, a larger group of researchers, of which Curry and Shah are a part, discovered web-based security vulnerabilities that affected cars sold by Acura, BMW, Ferrari, Genesis, Honda, Hyundai, Infiniti, Mercedes-Benz, Nissan, Rolls-Royce, and Toyota.

[…]

In December, information a whistleblower provided to the German hacker collective the Chaos Computer Club and Der Spiegel revealed that Cariad, a software company that partners with Volkswagen, had left detailed location data for 800,000 electric vehicles publicly exposed online. Privacy researchers at the Mozilla Foundation in September warned in a report that “modern cars are a privacy nightmare,” noting that 92 percent give car owners little to no control over the data they collect, and 84 percent reserve the right to sell or share your information. (Subaru tells WIRED that it “does not sell location data.”)

“While we worried that our doorbells and watches that connect to the internet might be spying on us, car brands quietly entered the data business by turning their vehicles into powerful data-gobbling machines,” Mozilla’s report reads.

[…]

Source: Subaru Security Flaws Exposed Its System for Tracking Millions of Cars | WIRED

Magic packet backdoor found on Juniper VPN routers

Someone has been quietly backdooring selected Juniper routers around the world in key sectors including semiconductor, energy, and manufacturing, since at least mid-2023.

The devices were infected with what appears to be a variant of cd00r, a publicly available “invisible backdoor” designed to operate stealthily on a victim’s machine by monitoring network traffic for specific conditions before activating.

It’s not yet publicly known how the snoops gained sufficient access to certain organizations’ Junos OS equipment to plant the backdoor, which gives them remote control over the networking gear. What we do know is that about half of the devices have been configured as VPN gateways.

Once injected, the backdoor, dubbed J-magic by Black Lotus Labs this week, resides in memory only and passively waits for one of five possible network packets to arrive. When one of those magic packet sequences is received by the machine, a connection is established with the sender, and a follow-up challenge is initiated by the backdoor. If the sender passes the test, they get command-line access to the box to commandeer it.

As Black Lotus Labs explained in this research note on Thursday: “Once that challenge is complete, J-Magic establishes a reverse shell on the local file system, allowing the operators to control the device, steal data, or deploy malicious software.”

While it’s not the first-ever discovered magic packet [PDF] malware, the team wrote, “the combination of targeting Junos OS routers that serve as a VPN gateway and deploying a passive listening in-memory-only agent, makes this an interesting confluence of tradecraft worthy of further observation.”

[…]

The malware creates an eBPF filter to monitor traffic to a specified network interface and port, and waits until it receives any of five specifically crafted packets from the outside world. If one of these magic packets – described in the lab’s report – shows up, the backdoor connects back over SSL to whoever sent it and transmits a random, five-character-long alphanumeric string encrypted using a hardcoded public RSA key. If the sender can decrypt the string using the private half of the key pair and send it back to the backdoor to verify, the malware will start accepting commands via the connection to run on the box.
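The challenge-response handshake described above can be sketched in a few lines of Python. This is a minimal illustration only, not J-magic's actual code: the tiny toy RSA keypair, the byte-at-a-time encryption, and the in-process "handshake" are all stand-ins (the real backdoor reportedly uses a hardcoded production-size key and an SSL connection).

```python
import secrets
import string

# Toy RSA keypair -- insecure, for illustration only.
P, Q = 61, 53          # toy primes
N = P * Q              # modulus (3233)
E = 17                 # public exponent (hardcoded in the backdoor's role)
D = 2753               # private exponent, held only by the operator

def make_challenge() -> str:
    """Backdoor side: a random five-character alphanumeric string,
    as Black Lotus Labs describes."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(5))

def encrypt(plaintext: str) -> list[int]:
    # Backdoor side: encrypt with the hardcoded public key
    # (byte-wise textbook RSA, purely to show the flow).
    return [pow(b, E, N) for b in plaintext.encode()]

def decrypt(ciphertext: list[int]) -> str:
    # Operator side: decrypt with the private half of the key pair.
    return bytes(pow(c, D, N) for c in ciphertext).decode()

# Handshake: backdoor issues the challenge; only a sender holding the
# private key can echo the plaintext back and gain shell access.
challenge = make_challenge()
response = decrypt(encrypt(challenge))
assert response == challenge
```

The design is a cheap authentication gate: the implant never exposes an open port of its own, and even a scanner that stumbles on the magic packet format cannot pass the challenge without the operator's private key.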

[…]

These victims span the globe, with the researchers documenting companies in the US, UK, Norway, the Netherlands, Russia, Armenia, Brazil, and Colombia. They included a fiber optics firm, a solar panel maker, manufacturing companies including two that build or lease heavy machinery, and one that makes boats and ferries, plus energy, technology, and semiconductor firms.

While most of the targeted devices were Juniper routers acting as VPN gateways, a more limited set of targeted IP addresses had an exposed NETCONF port, which is commonly used to help automate router configuration information and management.

This suggests the routers are part of a larger, managed fleet such as those in a network service provider, the researchers note.

[…]

Source: Mysterious backdoor found on select Juniper routers • The Register

F-35 AI-Enabled Drone Controller Capability Successfully Demonstrated

Lockheed Martin says the stealthy F-35 Joint Strike Fighter now has a firmly demonstrated ability to act as an in-flight ‘quarterback’ for advanced drones like the U.S. Air Force’s future Collaborative Combat Aircraft (CCA) with the help of artificial intelligence-enabled systems. The company states that its testing has also shown a touchscreen tablet-like device is a workable interface for controlling multiple uncrewed aircraft simultaneously from the cockpit of the F-35, as well as the F-22 Raptor. For the U.S. Air Force, how pilots in crewed aircraft will actually manage CCAs during operations has emerged as an increasingly important question.

Details about F-35 and F-22 related crewed-uncrewed teaming developments were included in a press release that Lockheed Martin put out late yesterday, wrapping up various achievements for the company in 2024.


The F-35 “has the capability to control drones, including the U.S. Air Force’s future fleet of Collaborative Combat Aircraft. Recently, Lockheed Martin and industry partners demonstrated end-to-end connectivity including the seamless integration of AI technologies to control a drone in flight utilizing the same hardware and software architectures built for future F-35 flight testing,” the press release states. “These AI-enabled architectures allow Lockheed Martin to not only prove out piloted-drone teaming capabilities, but also incrementally improve them, bringing the U.S. Air Force’s family of systems vision to life.”

“Lockheed Martin has demonstrated its piloted-drone teaming interface, which can control multiple drones from the cockpit of an F-35 or F-22,” the release adds. “This technology allows a pilot to direct multiple drones to engage enemies using a touchscreen tablet in the cockpit of their 5th Gen aircraft.”

A US Air Force image depicting an F-22 Raptor stealth fighter flying together with a Boeing MQ-28 Ghost Bat drone. USAF

The press release also highlights prior crewed-uncrewed teaming work that Lockheed Martin’s famed Skunk Works advanced projects division has done with the University of Iowa’s Operator Performance Laboratory (OPL) using surrogate platforms. OPL has also been working with other companies, including Shield AI, as well as the U.S. military, to support advanced autonomy and drone development efforts in recent years.

In November 2024, Lockheed Martin notably announced it had conducted tests with OPL that saw a human controller in an L-39 Albatros jet use a touchscreen interface to order two L-29 Delfin jets, equipped with AI-enabled flight technology acting as surrogate drones, to engage simulated enemy fighters. This sounds very similar to the kind of control architecture the company says it has now demonstrated on the F-35.

A view of the “battle manager” at work in the back seat of the L-39 jet while issuing commands to the L-29s acting as surrogate drones. Lockheed Martin

[…]

The Air Force is also still very much in the process of developing new concepts of operations and tactics, techniques, and procedures for employing CCA drones operationally. How the drones will fit into the service’s force structure and be utilized in routine training and other day-to-day peacetime activities, along with what the maintenance and logistical demands will be, also remains to be seen. Questions about in-flight command and control have emerged as particularly important ones to answer in the near term.

[…]

As Lockheed Martin’s new touting of its work on tablet-based control interfaces highlights, there is a significant debate now just about how pilots will physically issue orders and otherwise manage drones from their cockpits.

A picture of a drone control system using a tablet-like device that General Atomics has previously released. GA-ASI

“There’s a lot of opinions amongst the Air Force about the right way to go [about controlling drones from other aircraft],” John Clark, then head of Skunk Works, also told The War Zone and others at the AFA gathering in September 2024. “The universal thought, though, is that this [a tablet or other touch-based interface] may be the fastest way to begin experimentation. It may not be the end state.”

“We’re working through a spectrum of options that are the minimum invasive opportunities, as well as something that’s more organically equipped, where there’s not even a tablet,” Clark added.

[…]

In addition, there are still many questions about the secure communications architectures that will be needed to support operations involving CCAs and similar drones, as well as for F-35s and F-22s to operate effectively in the airborne controller role. The F-35 could use the popular omnidirectional Link 16 network for this purpose, but doing so would make it easier for opponents to detect the fighter jet and the drone. The F-22, which has long only had the ability to receive and not transmit data via Link 16, faces similar issues.

[…]

Expanding the ability of the F-35, specifically, to serve in the drone controller role has potential ramifications beyond the Air Force’s CCA program. The Air Force and Navy have already been working together on systems that will allow for the seamless exchange of control of CCAs and other drones belonging to either service during future operations. The U.S. Marine Corps, which is pursuing its own loyal wingman-type drones currently through experimentation with Kratos XQ-58 Valkyries, also has formal ties to the Air Force’s CCA program. All three services fly variants of the Joint Strike Fighter.

It’s also worth noting here that the U.S. military has been publicly demonstrating the ability of tactical jets to actively control drones in mid-air for nearly a decade now, at least. In 2015, a U.S. Marine Corps AV-8B Harrier jump jet flew notably together with a Kratos Unmanned Tactical Aerial Platform-22 (UTAP-22) drone in testing that included “command and control through the tactical data link.” Other experimentation is known to have occurred across the U.S. military since then, and this doesn’t account for additional work in the classified domain.

[…]

Source: F-35 AI-Enabled Drone Controller Capability Successfully Demonstrated

The EU’s AI Act – a very quick primer on what and why

Have you ever been in a group project where one person decided to take a shortcut, and suddenly, everyone ended up under stricter rules? That’s essentially what the EU is saying to tech companies with the AI Act: “Because some of you couldn’t resist being creepy, we now have to regulate everything.” This legislation isn’t just a slap on the wrist—it’s a line in the sand for the future of ethical AI.

Here’s what went wrong, what the EU is doing about it, and how businesses can adapt without losing their edge.

When AI Went Too Far: The Stories We’d Like to Forget

Target and the Teen Pregnancy Reveal

One of the most infamous examples of AI gone wrong happened back in 2012, when Target used predictive analytics to market to pregnant customers. By analyzing shopping habits—think unscented lotion and prenatal vitamins—they managed to identify a teenage girl as pregnant before she told her family. Imagine her father’s reaction when baby coupons started arriving in the mail. It wasn’t just invasive; it was a wake-up call about how much data we hand over without realizing it. (Read more)

Clearview AI and the Privacy Problem

On the law enforcement front, tools like Clearview AI created a massive facial recognition database by scraping billions of images from the internet. Police departments used it to identify suspects, but it didn’t take long for privacy advocates to cry foul. People discovered their faces were part of this database without consent, and lawsuits followed. This wasn’t just a misstep—it was a full-blown controversy about surveillance overreach. (Learn more)

The EU’s AI Act: Laying Down the Law

The EU has had enough of these oversteps. Enter the AI Act: the first major legislation of its kind, categorizing AI systems into four risk levels:

  1. Minimal Risk: Chatbots that recommend books—low stakes, little oversight.
  2. Limited Risk: Systems like AI-powered spam filters, requiring transparency but little more.
  3. High Risk: This is where things get serious—AI used in hiring, law enforcement, or medical devices. These systems must meet stringent requirements for transparency, human oversight, and fairness.
  4. Unacceptable Risk: Think dystopian sci-fi—social scoring systems or manipulative algorithms that exploit vulnerabilities. These are outright banned.

For companies operating high-risk AI, the EU demands a new level of accountability. That means documenting how systems work, ensuring explainability, and submitting to audits. If you don’t comply, the fines are enormous—up to €35 million or 7% of global annual revenue, whichever is higher.
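The "whichever is higher" rule means the penalty ceiling scales with company size. A quick Python sketch makes the arithmetic concrete (the function name is mine; the thresholds are the Act's):

```python
def max_ai_act_fine(global_annual_revenue_eur: float) -> float:
    """Ceiling for the most serious violations under the EU AI Act:
    EUR 35 million or 7% of global annual revenue, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# Below EUR 500M in revenue the EUR 35M floor dominates;
# above it, the 7% share takes over.
print(max_ai_act_fine(100_000_000))    # 35000000.0
print(max_ai_act_fine(2_000_000_000))  # 140000000.0
```

The crossover sits at EUR 500 million in revenue, which is why the fine regime bites very differently for a startup than for a Big Tech platform.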

Why This Matters (and Why It’s Complicated)

The Act is about more than just fines. It’s the EU saying, “We want AI, but we want it to be trustworthy.” At its heart, this is a “don’t be evil” moment, but achieving that balance is tricky.

On one hand, the rules make sense. Who wouldn’t want guardrails around AI systems making decisions about hiring or healthcare? But on the other hand, compliance is costly, especially for smaller companies. Without careful implementation, these regulations could unintentionally stifle innovation, leaving only the big players standing.

Innovating Without Breaking the Rules

For companies, the EU’s AI Act is both a challenge and an opportunity. Yes, it’s more work, but leaning into these regulations now could position your business as a leader in ethical AI. Here’s how:

  • Audit Your AI Systems: Start with a clear inventory. Which of your systems fall into the EU’s risk categories? If you don’t know, it’s time for a third-party assessment.
  • Build Transparency Into Your Processes: Treat documentation and explainability as non-negotiables. Think of it as labeling every ingredient in your product—customers and regulators will thank you.
  • Engage Early With Regulators: The rules aren’t static, and you have a voice. Collaborate with policymakers to shape guidelines that balance innovation and ethics.
  • Invest in Ethics by Design: Make ethical considerations part of your development process from day one. Partner with ethicists and diverse stakeholders to identify potential issues early.
  • Stay Dynamic: AI evolves fast, and so do regulations. Build flexibility into your systems so you can adapt without overhauling everything.

The Bottom Line

The EU’s AI Act isn’t about stifling progress; it’s about creating a framework for responsible innovation. It’s a reaction to the bad actors who’ve made AI feel invasive rather than empowering. By stepping up now—auditing systems, prioritizing transparency, and engaging with regulators—companies can turn this challenge into a competitive advantage.

The message from the EU is clear: if you want a seat at the table, you need to bring something trustworthy. This isn’t about “nice-to-have” compliance; it’s about building a future where AI works for people, not at their expense.

And if we do it right this time? Maybe we really can have nice things.

Source: The EU’s AI Act – Gigaom

Inheritance, “cronyism and corruption” or monopoly power grew billionaire wealth in 2024, the second-largest annual increase since records began

The wealth of the world’s billionaires grew by $2tn (£1.64tn) last year, three times faster than in 2023, amounting to $5.7bn (£4.7bn) a day, according to a report by Oxfam.

The latest inequality report from the charity reveals that the world is now on track to have five trillionaires within a decade, a change from last year’s forecast of one trillionaire within 10 years.

[…]

At the same time, the number of people living under the World Bank poverty line of $6.85 a day has barely changed since 1990, and is close to 3.6 billion – equivalent to 44% of the world’s population today, the charity said. One in 10 women lives in extreme poverty (below $2.15 a day), which means 24.3 million more women than men endure extreme poverty.

Oxfam warned that progress on reducing poverty has ground to a halt and that extreme poverty could be ended three times faster if inequality were to be reduced.

[…]

Rising share values on global stock exchanges account for most of the increase in billionaire wealth, though higher property values also played a role. Residential property accounts for about 80% of worldwide investments.

Globally, the number of billionaires rose by 204 last year to 2,769. Their combined wealth jumped from $13tn to $15tn in just 12 months – the second-largest annual increase since records began. The wealth of the world’s 10 richest men grew on average by almost $100m a day and even if they lost 99% of their wealth overnight, they would remain billionaires.

[…]

The report argues that most of the wealth is taken, not earned, as 60% comes from either inheritance, “cronyism and corruption” or monopoly power. It calculates that 18% of the wealth arises from monopoly power.

[…]

Anna Marriott, Oxfam’s inequality policy lead, said: “Last year we predicted the first trillionaire could emerge within a decade, but this shocking acceleration of wealth means that the world is now on course for at least five. The global economic system is broken, wholly unfit for purpose as it enables and perpetuates this explosion of riches, while nearly half of humanity continues to live in poverty.”

She called on the UK government to prioritise economic policies that bring down inequality, including higher taxation of the super-rich.

[…]

Source: Wealth of world’s billionaires grew by $2tn in 2024, report finds | The super-rich | The Guardian

Bluesky 2024 Moderation Report shows 17x more user content reports with 10x user growth, fed by Brazilian serial complainers

[…] In 2024, Bluesky grew from 2.89M users to 25.94M users. In addition to users hosted on Bluesky’s infrastructure, there are over 4,000 users running their own infrastructure (Personal Data Servers), self-hosting their content, posts, and data.

To meet the demands caused by user growth, we’ve increased our moderation team to roughly 100 moderators and continue to hire more staff. Some moderators specialize in particular policy areas, such as dedicated agents for child safety.

[…]

In 2024, users submitted 6.48M reports to Bluesky’s moderation service. That’s a 17x increase from the previous year — in 2023, users submitted 358K reports total. The volume of user reports increased with user growth and was non-linear, as the graph of report volume below shows:

Report volume in 2024
In late August, there was a large increase in user growth for Bluesky from Brazil, and we saw spikes of up to 50k reports per day. Prior to this, our moderation team handled most reports within 40 minutes. For the first time in 2024, we now had a backlog in moderation reports. To address this, we increased the size of our Portuguese-language moderation team, added constant moderation sweeps and automated tooling for high-risk areas such as child safety, and hired moderators through an external contracting vendor for the first time.

We already had automated spam detection in place, and after this wave of growth in Brazil, we began investing in automating more categories of reports so that our moderation team would be able to review suspicious or problematic content rapidly. In December, we were able to review our first wave of automated reports for content categories like impersonation. This dropped processing time for high-certainty accounts to within seconds of receiving a report, though it also caused some false positives. We’re now exploring the expansion of this tooling to other policy areas. Even while instituting automation tooling to reduce our response time, human moderators are still kept in the loop — all appeals and false positives are reviewed by human moderators.

Some more statistics: The proportion of users submitting reports held fairly stable from 2023 to 2024. In 2023, 5.6% of our active users created one or more reports. In 2024, 1.19M users made one or more reports, approximately 4.57% of our user base.

In 2023, 3.4% of our active users received one or more reports. In 2024, the number of users who received a report were 770K, comprising 2.97% of our user base.

The majority of reports were of individual posts, with a total of 3.5M reports. This was followed by account profiles at 47K reports, typically for a violative profile picture or banner photo. Lists received 45K reports. DMs received 17.7K reports. Significantly lower are feeds at 5.3K reports, and starter packs with 1.9K reports.

Our users report content for a variety of reasons, and these reports help guide our focus areas. Below is a summary of the reports we received, categorized by the reasons users selected. The categories vary slightly depending on whether a report is about an account or a specific post, but here’s the full breakdown:

  • Anti-social Behavior: Reports of harassment, trolling, or intolerance – 1.75M
  • Misleading Content: Includes impersonation, misinformation, or false claims about identity or affiliations – 1.20M
  • Spam: Excessive mentions, replies, or repetitive content – 1.40M
  • Unwanted Sexual Content: Nudity or adult content not properly labeled – 630K
  • Illegal or Urgent Issues: Clear violations of the law or our terms of service – 933K
  • Other: Issues that don’t fit into the above categories – 726K

[…]

The top human-applied labels were:

  • Sexual-figurative – 55,422
  • Rude – 22,412
  • Spam – 13,201
  • Intolerant – 11,341
  • Threat – 3,046

Appeals

In 2024, 93,076 users submitted at least one appeal in the app, for a total of 205K individual appeals. In most cases, the appeal was due to disagreement with a label verdict.

[…]

Legal Requests

In 2024, we received 238 requests from law enforcement, governments, and legal firms; we responded to 182 and complied with 146. The majority of requests came from German, U.S., Brazilian, and Japanese law enforcement.

[…]

Copyright / Trademark

In 2024, we received a total of 937 copyright and trademark cases. There were four confirmed copyright cases in the entire first half of 2024, and this number increased to 160 in September. The vast majority of cases occurred between September to December.

[…]

Source: Bluesky 2024 Moderation Report – Bluesky

The following lines are especially interesting: Brazilians seem to be the kind of users who really enjoy reporting on other people, and not only that, they also like to assault or brigade specific users.

In late August, there was a large increase in user growth for Bluesky from Brazil, and we saw spikes of up to 50k reports per day.

In 2023, 5.6% of our active users created one or more reports. In 2024, 1.19M users made one or more reports, approximately 4.57% of our user base.

In 2023, 3.4% of our active users received one or more reports. In 2024, the number of users who received a report were 770K, comprising 2.97% of our user base.

ChatGPT crawler flaw opens door to DDoS, prompt injection

In a write-up shared this month via Microsoft’s GitHub, Benjamin Flesch, a security researcher in Germany, explains how a single HTTP request to the ChatGPT API can be used to flood a targeted website with network requests from the ChatGPT crawler, specifically ChatGPT-User.

This flood of connections may or may not be enough to knock over any given site, practically speaking, though it’s still arguably a danger and a bit of an oversight by OpenAI. It can be used to amplify a single API request into 20 to 5,000 or more requests to a chosen victim’s website, every second, over and over again.

“ChatGPT API exhibits a severe quality defect when handling HTTP POST requests to https://chatgpt.com/backend-api/attributions,” Flesch explains in his advisory, referring to an API endpoint called by OpenAI’s ChatGPT to return information about web sources cited in the chatbot’s output. When ChatGPT mentions specific websites, it will call attributions with a list of URLs to those sites for its crawler to go access and fetch information about.

If you throw a big long list of URLs at the API, each slightly different but all pointing to the same site, the crawler will go off and hit every one of them at once.

[…]

Thus, using a tool like Curl, an attacker can send an HTTP POST request – without any need for an authentication token – to that ChatGPT endpoint and OpenAI’s servers in Microsoft Azure will respond by initiating an HTTP request for each hyperlink submitted via the urls[] parameter in the request. When those requests are directed to the same website, they can potentially overwhelm the target, causing DDoS symptoms – the crawler, proxied by Cloudflare, will visit the targeted site from a different IP address each time.

[…]

“I’d say the bigger story is that this API was also vulnerable to prompt injection,” he said, in reference to a separate vulnerability disclosure. “Why would they have prompt injection for such a simple task? I think it might be because they’re dogfooding their autonomous ‘AI agent’ thing.”

That second issue can be exploited to make the crawler answer queries via the same attributions API endpoint; you can feed questions to the bot, and it can answer them, when it’s really not supposed to do that; it’s supposed to just fetch websites.

Flesch questioned why OpenAI’s bot hasn’t implemented simple, established methods to properly deduplicate URLs in a requested list or to limit the size of the list, nor managed to avoid prompt injection vulnerabilities that have been addressed in the main ChatGPT interface.
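The deduplication and list-capping Flesch describes are straightforward to implement. Here is a hedged Python sketch; the normalization rules and the cap of 10 are my illustrative choices, not anything OpenAI has published:

```python
from urllib.parse import urlsplit, urlunsplit

MAX_URLS = 10  # illustrative cap on how many URLs one request may fetch

def normalize(url: str) -> str:
    """Collapse trivial variants (case, fragments, trailing slashes)
    so they count as one URL. Real canonicalization would do more."""
    parts = urlsplit(url.strip().lower())
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, ""))

def dedupe_and_cap(urls: list[str]) -> list[str]:
    """Keep only the first occurrence of each normalized URL,
    then enforce a hard limit on list size."""
    seen: set[str] = set()
    out: list[str] = []
    for u in urls:
        key = normalize(u)
        if key not in seen:
            seen.add(key)
            out.append(u)
    return out[:MAX_URLS]

# 5,000 fragment variants of one page collapse to a single fetch,
# removing the amplification factor.
variants = [f"https://victim.example/page#{i}" for i in range(5000)]
print(len(dedupe_and_cap(variants)))  # 1
```

A production version would also need per-host rate limits, since query-string variants still count as distinct URLs under this normalization; the point is only that a few lines of input hygiene remove the 20x-to-5,000x amplification described above.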

[…]

Source: ChatGPT crawler flaw opens door to DDoS, prompt injection • The Register

Robotic exoskeleton can train expert pianists to play faster

A robotic hand exoskeleton can help expert pianists learn to play even faster by moving their fingers for them.

Robotic exoskeletons have long been used to rehabilitate people who can no longer use their hands due to an injury or medical condition, but using them to improve the abilities of able-bodied people has been less well explored.

Now, Shinichi Furuya at Sony Computer Science Laboratories in Tokyo and his colleagues have found that a robotic exoskeleton can improve the finger speed of trained pianists after a single 30-minute training session.

[…]

The robotic exoskeleton can raise and lower each finger individually, up to four times a second, using a separate motor attached to the base of each finger.

To test the device, the researchers recruited 118 expert pianists who had all played since before they had turned 8 years old and for at least 10,000 hours, and asked them to practise a piece for two weeks until they couldn’t improve.

Then, the pianists received a 30-minute training session with the exoskeleton, which moved the fingers of their right hand in different combinations of simple and complex patterns, either slowly or quickly, so that Furuya and his colleagues could pinpoint what movement type caused improvement.

The pianists who experienced the fast and complex training could better coordinate their right hand movements and move the fingers of either hand faster, both immediately after training and a day later. This, together with evidence from brain scans, indicates that the training changed the pianists’ sensory cortices to better control finger movements in general, says Furuya.

“This is the first time I’ve seen somebody use [robotic exoskeletons] to go beyond normal capabilities of dexterity, to push your learning past what you could do naturally,” says Nathan Lepora at the University of Bristol, UK. “It’s a bit counterintuitive why it worked, because you would have thought that actually performing the movements yourself voluntarily would be the way to learn, but it seems passive movements do work.”

 

Journal reference:

Science Robotics DOI: 10.1126/scirobotics.adn3802

Source: Robotic exoskeleton can train expert pianists to play faster | New Scientist

As Zuckerberg Goes Around Whining About Biden, He Made Sure To First Get His New Approach Approved By Trump

Remember how Zuckerberg was “done with politics”? Remember how he promised that he was going to stop doing what politicians demanded he do?

Now it turns out that he not only did his big set of moderation changes to please Trump, but did so only after he was told by the incoming administration to act. Even worse, he reportedly made sure to share his plans with top Trump aides to get their approval first.

That’s a key takeaway from a new New York Times piece that is ostensibly a profile of the relentlessly awful Stephen Miller. However, it also has a few revealing details about the whole Zuckerberg saga buried within. First, Miller reportedly demanded that Zuckerberg make changes at Facebook “on Trump’s terms.”

Mr. Miller told Mr. Zuckerberg that he had an opportunity to help reform America, but it would be on President-elect Donald J. Trump’s terms. He made clear that Mr. Trump would crack down on immigration and go to war against the diversity, equity and inclusion, or D.E.I., culture that had been embraced by Meta and much of corporate America in recent years.

Mr. Zuckerberg was amenable. He signaled to Mr. Miller and his colleagues, including other senior Trump advisers, that he would do nothing to obstruct the Trump agenda, according to three people with knowledge of the meeting, who asked for anonymity to discuss a private conversation. Mr. Zuckerberg said he would instead focus solely on building tech products.

Even if you argue that this was more about DEI programs at Meta rather than about content moderation, it’s still the incoming administration reportedly making actual demands of Zuckerberg, and Zuckerberg not just saying “fine” but actually previewing the details to Miller to make sure they got Trump’s blessing.

Earlier this month, Mr. Zuckerberg’s political lieutenants previewed the changes to Mr. Miller in a private briefing. And on Jan. 10, Mr. Zuckerberg made them official….

This is especially galling given that it was just days ago when Zuckerberg was whining about how unfair it was that Biden officials were demanding stuff from him (even though he had no trouble saying no to them) and it was big news! The headlines made a huge deal of how unfair Biden was to Zuckerberg. Here’s just a sampling.

Image

Notably absent from this breathless coverage was any mention that Trump was the one who actually threatened to imprison Zuckerberg for life. Or that his incoming FCC chair threatened to remove Section 230 if Meta didn’t stop fact-checking.

Also conveniently omitted was the fact that the Supreme Court found no evidence of the Biden administration going over the line in its conversations with Meta. Indeed, a Supreme Court Justice noted that conversations like those that the Biden admin had with Meta happened “thousands of times a day,” and weren’t problematic because there was no inherent threat or direct coordination.

Yet, here, we have reports of both threats and now evidence of direct coordination, including Zuckerberg asking for and getting direct approval from a top Trump official before rolling out the policy.

And where is this bombshell revelation? It’s buried in a random profile piece puffing up Stephen Miller.

It’s almost as if everyone now takes it for granted that any made-up story about Biden will be treated as fact, while shrugging when Trump actually does the thing Biden was falsely accused of.

With this new story, don’t hold your breath waiting for the same outlets to give this anywhere near the same level of coverage and outrage they directed at the Biden administration.

It’s almost as if there’s a massive double standard here: everything is okay if Trump does it, but we can blame the Biden admin for things we only pretend they did.

[…]

Source: As Zuckerberg Goes Around Whining About Biden, He Made Sure To First Get His New Approach Approved By Trump | Techdirt

The US press walks in lockstep with the Trump Fascist movement.

NY Post: Fact Checking Is Now Censorship

This was inevitable, ever since Donald Trump and the MAGA world freaked out at social media’s attempts to fact-check the President, deeming them “censorship.” The reaction was both swift and entirely predictable. After all, how dare anyone question Dear Leader’s proclamations, even if they are demonstrably false? It wasn’t long before we started to see opinion pieces from MAGA folks breathlessly declaring that “fact-checking private speech is outrageous.” There were even politicians proposing laws to ban fact-checking.

In their view, the best way to protect free speech is apparently (?!?) to outlaw speech you don’t like.

This trend has only accelerated in recent years. Last year, Congress got in on the game, arguing that fact-checking is a form of censorship that needs to be investigated. Not to be outdone, incoming FCC chair Brendan Carr has made the same argument.

With last week’s announcement by Mark Zuckerberg that Meta was ending its fact-checking program, the anti-fact-checking rhetoric hasn’t slowed down one bit.

The NY Post now has an article with the hilarious headline: “The incredible, blind arrogance of the ‘fact-checking’ censors.”

So let’s be clear here: fact-checking is speech. Fact-checking is not censorship. It is protected by the First Amendment. Indeed, in olden times, when free speech supporters would talk about the “marketplace of ideas” and the “best response to bad speech is more speech,” they meant things like fact-checking. They meant that if someone were blathering on about utter nonsense, then a regime that enabled more speech could come along and fact-check folks.

There is no “censorship” involved in fact-checking. There is only a question of how others respond to the fact checks.

[…]

There’s a really fun game that the Post Editorial Board is playing here, pretending that they’re just fine with fact-checking, unless it leads to “silencing.”

The real issue, that is, isn’t the checking, it’s the silencing.

But what “silencing” ever actually happened due to fact-checking? And when was it caused by the government (which would be necessary for it to violate the First Amendment)? The answer: it never was.

The piece whines about a few NY Post articles that had limited reach on Facebook, but that’s Facebook’s own free speech as well, not censorship.

[…]

The Post goes on with this fun set of words:

Yes, the internet is packed with lies, misrepresentations and half-truths: So is all human conversation.

The only practical answer to false speech is, and always has been, true speech; it doesn’t stop the liars or protect all the suckers, but most people figure it out well enough.

Shutting down debate in the name of “countering disinformation” only serves the liars with power or prestige or at least the right connections.

First off, the standard saying is that the response to false speech should be “more speech,” not necessarily “true speech.” But more to the point, uh, how do you get that “true speech”? Isn’t it… fact checking? And if, as the NY Post suggests, the problem here is false speech in the fact checks, then shouldn’t the response be more speech rather than silencing the fact checkers?

I mean, their own argument isn’t even internally consistent.

They’re literally saying that we need more “truthful speech” and less “silencing of speech” while cheering on the silencing of organizations who try to provide more truthful speech.

[…]

Source: NY Post: Fact Checking Is Now Censorship | Techdirt

Hello Fascism in the 4th Reich!

A new optical memory platform for super fast calculations

[…] photonics, which offers lower energy consumption and lower latency than electronics.

One of the most promising approaches is in-memory computing, which requires the use of photonic memories. Passing light signals through these memories makes it possible to perform operations nearly instantaneously. But solutions proposed for creating such memories have faced challenges such as low switching speeds and limited programmability.

Now, an international team of researchers has developed a groundbreaking photonic platform to overcome those limitations. Their findings were published in the journal Nature Photonics.

[…]

The researchers used a magneto-optical material, cerium-substituted yttrium iron garnet (Ce:YIG), whose optical properties dynamically change in response to external magnetic fields. By employing tiny magnets to store data and control the propagation of light within the material, they pioneered a new class of magneto-optical memories. The innovative platform leverages light to perform calculations at significantly higher speeds and with much greater efficiency than can be achieved using traditional electronics.

These new memories have switching speeds 100 times faster than those of state-of-the-art photonic integrated technology, consume about one-tenth the power, and can be reprogrammed repeatedly to perform different tasks. While current state-of-the-art optical memories have a limited lifespan and can be written up to 1,000 times, the team demonstrated that magneto-optical memories can be rewritten more than 2.3 billion times, equating to a potentially unlimited lifespan.
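The in-memory computing idea described above can be sketched with a toy numerical model: treat each memory cell’s optical transmission as a stored weight, so that light passing through an array of cells performs a multiply-accumulate in place, with no data shuttled between memory and processor. Everything here (array sizes, values, variable names) is illustrative, not taken from the paper.

```python
import numpy as np

# Toy model of photonic in-memory computing (illustrative assumptions only).
# In an optical crossbar, each memory cell's transmission encodes a stored
# weight; light passing through the cells multiplies and sums "for free".
rng = np.random.default_rng(0)

weights = rng.uniform(0.0, 1.0, size=(4, 8))  # programmed cell transmissions
signal = rng.uniform(0.0, 1.0, size=8)        # input light intensities

# Each output waveguide collects the light transmitted by one row of cells:
# a matrix-vector product computed inside the memory itself.
output = weights @ signal

# An electronic reference computes the same result explicitly, step by step.
reference = np.array([sum(w * s for w, s in zip(row, signal))
                      for row in weights])

assert np.allclose(output, reference)
```

In the reported platform the “reprogramming” step corresponds to flipping the tiny magnets that set each cell’s transmission, which is why switching speed and write endurance are the figures of merit highlighted above.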

[…]

Source: A new optical memory platform for super fast calculations | ScienceDaily

Why Has Zuckerberg Stopped Meta Fact-Checking? Trump Lifetime Prison Threats and FCC Section 230 Removal Threats?

If you only remember two things about the government pressure campaign to influence Mark Zuckerberg’s content moderation decisions, make it these: Donald Trump directly threatened to throw Zuck in prison for the rest of his life, and just a couple months ago FCC Commissioner (soon to be FCC chair) Brendan Carr threatened Meta that if it kept on fact-checking stories in a way Carr didn’t like, he would try to remove Meta’s Section 230 protections in response.

Two months later — what do you know? — Zuckerberg ended all fact-checking on Meta. But when he went on Joe Rogan, rather than blaming those actual obvious threats, he instead blamed the Biden administration, because some admin officials sent angry emails… which Zuck repeatedly admits had zero impact on Meta’s actual policies.

[…]

This is a simplified version of what happened, which can be summarized as: the actual threats came from the GOP, to which Zuckerberg quickly caved. The supposed threats from the Biden admin were overhyped, exaggerated, and misrepresented, and Zuck directly admits he was able to easily refuse those requests.

All the rest is noise.

[Here follows a long detailed unpacking of the Rogan interview]

As mentioned in my opening, Donald Trump directly threatened to throw Zuck in prison for the rest of his life if Facebook didn’t moderate the way he wanted. And just a couple months ago, FCC Commissioner (soon to be FCC chair) Brendan Carr threatened Meta that if it kept on fact-checking stories in a way Carr didn’t like, he would try to remove Meta’s Section 230 protections in response.

None of that came up in this discussion. The only “government pressure” that Zuck talks about is from the Biden admin with “cursing,” which he readily admits they weren’t intimidated by.

So we have Biden officials who were, perhaps, mean, but not so threatening that Meta felt the need to bow down to them. And then we have Trump himself and leading members of his incoming administration who sent direct and obvious threats, which Zuck almost immediately bowed down to and caved.

And yet Rogan (and much of the media covering this podcast) claims he “revealed” how the Biden admin violated the First Amendment. Hell, the NY Post even ran an editorial pretending that Zuck didn’t go far enough because he didn’t reveal all of this in time for the Murthy case. And that’s only because the author doesn’t realize he literally is talking about the documents in the Murthy case.

The real story here is that Zuckerberg caved to Trump’s threats and felt fine pushing back on the Biden admin. Rogan at one point rants about how Trump will now protect Zuck because Trump “uniquely has felt the impact of not being able to have free speech.” Given what actually happened, that is particularly ironic.

[…]

Strip away all the spin and misdirection, and the truth is inescapable: Zuckerberg folded like a cheap suit in the face of direct threats from Trump and his lackeys, while barely batting an eye at some sternly worded emails from Biden officials.

[…]

Source: Rogan Misses The Mark: How Zuck’s Misdirection On Gov’t Pressure Goes Unchallenged | Techdirt

Google won’t add fact-checks despite new EU law

Google has told the EU it will not add fact checks to search results and YouTube videos or use them in ranking or removing content, despite the requirements of a new EU law, according to a copy of a letter obtained by Axios.

The big picture: Google has never included fact-checking as part of its content moderation practices. The company had signaled privately to EU lawmakers that it didn’t plan to change its practices, but it’s reaffirming its stance ahead of a voluntary code becoming law in the near future.

Zoom in: In a letter written to Renate Nikolay, deputy director general of the European Commission’s content and technology arm, Google’s global affairs president Kent Walker said the fact-checking integration required by the Commission’s new Disinformation Code of Practice “simply isn’t appropriate or effective for our services” and said Google won’t commit to it.

  • The code would require Google to incorporate fact-check results alongside Google’s search results and YouTube videos. It would also force Google to build fact-checking into its ranking systems and algorithms.
  • Walker said Google’s current approach to content moderation works and pointed to successful content moderation during last year’s “unprecedented cycle of global elections” as proof.
  • He said a new feature added to YouTube last year that enables some users to add contextual notes to videos “has significant potential.” (That program is similar to X’s Community Notes feature, as well as a new program announced by Meta last week.)

Catch up quick: The EU’s Code of Practice on Disinformation, introduced in 2022, includes several voluntary commitments that tech firms and private companies, including fact-checking organizations, are expected to deliver on.

  • The Code, originally created in 2018, predates the EU’s new content moderation law, the Digital Services Act (DSA), which went into effect in 2022.

State of play: The Commission has held private discussions over the past year with tech companies, urging them to convert the voluntary measures into an official code of conduct under the DSA.

  • Walker said in his letter Thursday that Google had already told the Commission that it didn’t plan to comply.
  • Google will “pull out of all fact-checking commitments in the Code before it becomes a DSA Code of Conduct,” he wrote.
  • He said Google will continue to invest in improvements to its current content moderation practices, which focus on providing people with more information about their search results through features like SynthID watermarking and AI disclosures on YouTube.

Zoom out: The news comes amid a global reckoning about the role tech platforms should play in fact-checking and policing speech.

Source: Google won’t add fact-checks despite new EU law