Judge: Apple Lied in Fortnite Case, Chose Not to Comply With Court Order, Must Immediately Allow External Payments Without a Cut

Epic Games v. Apple judge Yvonne Gonzalez Rogers has ruled that, effective immediately, Apple can no longer take a cut from purchases made outside apps and has blocked the tech giant from restricting how developers can point people to third-party payment options. The judge was also unhappy that Apple had seemingly not complied with a previous court order and referred the case to the U.S. Attorney’s Office for possible contempt charges. Apple is already planning to appeal the ruling.

This is the latest development in the Epic v. Apple court case, which started back in 2020 after Epic added its own payment option to Fortnite on iOS and Apple pulled the game in response. The Fortnite maker’s case against Apple focused primarily on the large fees the tech giant took from all in-app purchases and its strict restrictions against allowing other app stores and third-party payment options on iOS devices.

In 2021 the judge sided with Apple on most points, but declared that the company needed to allow app makers to use third-party payment systems that could avoid Apple’s cut. In 2023, after a series of appeals, Apple declared a “resounding victory” over Epic, though it was still required by the court to allow third-party payment options and not to take a cut of purchases made outside apps. Epic alleges that Apple never complied with that order. Now Apple finds itself in a lot of trouble with Judge Yvonne Gonzalez Rogers.

“That [Apple] thought this Court would tolerate such insubordination was a gross miscalculation,” wrote the judge in a ruling filed on April 30 in California. “Apple willfully chose not to comply with this Court’s Injunction. It did so with the express intent to create new anticompetitive barriers which would, by design and in effect, maintain a valued revenue stream; a revenue stream previously found to be anticompetitive.”

Elsewhere in the filing, the judge says that an Apple executive lied under oath when talking about forcing devs to pay a 27 percent fee for outside app purchases and wrote that Apple CEO Tim Cook “chose poorly” when listening to execs at the company who convinced him to ignore the injunction.

“Vice-President of Finance, Alex Roman, outright lied under oath. Internally, Phillip Schiller had advocated that Apple comply with the Injunction, but Tim Cook ignored Schiller and instead allowed Chief Financial Officer Luca Maestri and his finance team to convince him otherwise. Cook chose poorly,” wrote the judge. In the filing the judge also suggested that Apple’s actions might warrant contempt charges and referred the case to the U.S. Attorney’s Office.

As explained in the filing, Apple must now “immediately” comply with the court’s orders to allow developers to include third-party payment options, to not take a cut of those purchases, and to not block or hinder devs from including these outside payment methods through various means and UI messages.

[…]

Source: Judge: Apple Lied In Fortnite Case And Just Blew App Store Open

Huge molecular cloud 10x the size of the Moon detected right next to Earth

A longstanding prediction in interstellar theory posits that significant quantities of molecular gas, crucial for star formation, may be undetected due to being ‘dark’ in commonly used molecular gas tracers, such as carbon monoxide. We report the discovery of Eos, a dark molecular cloud located just 94 pc from the Sun. This cloud is identified using H2 far-ultraviolet fluorescent line emission, which traces molecular gas at the boundary layers of star-forming and supernova remnant regions. The cloud edge is outlined along the high-latitude side of the North Polar Spur.
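For scale, a quick unit conversion (mine, not from the paper): at 94 pc, Eos sits only about 300 light-years away.

```latex
% 1 parsec is approximately 3.26 light-years
94\ \mathrm{pc} \times 3.26\ \tfrac{\mathrm{ly}}{\mathrm{pc}} \approx 306\ \mathrm{ly}
```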

[…]

 

Source: A nearby dark molecular cloud in the Local Bubble revealed via H2 fluorescence | Nature Astronomy

Messaging App Used by Mike Waltz, Trump Deportation Airline GlobalX Both Hacked in Separate Breaches

TeleMessage, a communications app used by former Trump national security adviser Mike Waltz, has suspended services after a reported hack exposed some user messages. The breach follows controversy over Waltz’s use of the app to coordinate military updates, including accidentally adding a journalist to a sensitive Signal group chat. From the report: In an email, Portland, Oregon-based Smarsh, which runs the TeleMessage app, said it was “investigating a potential security incident” and was suspending all its services “out of an abundance of caution.” A Reuters photograph showed Waltz using TeleMessage, an unofficial version of the popular encrypted messaging app Signal, on his phone during a cabinet meeting on Wednesday.

A separate report from 404 Media says hackers have also targeted GlobalX Air — one of the main airlines the Trump administration is using as part of its deportation efforts — and claim to have stolen flight records and passenger manifests for all its flights, including those for deportation. From the report: The data, which the hackers contacted 404 Media and other journalists about unprompted, could provide granular insight into who exactly has been deported on GlobalX flights, when, and to where, with GlobalX being the charter company that facilitated the deportation of hundreds of Venezuelans to El Salvador. “Anonymous has decided to enforce the Judge’s order since you and your sycophant staff ignore lawful orders that go against your fascist plans,” reads a defacement message posted to GlobalX’s website. Anonymous, well known for its use of the Guy Fawkes mask, is an umbrella under which some hackers operate when performing what they see as hacktivism.

Source: Messaging App Used by Mike Waltz, Trump Deportation Airline GlobalX Both Hacked in Separate Breaches | Slashdot

Dating app Raw exposed users’ location data and personal information

A security lapse at dating app Raw publicly exposed the personal data and private location data of its users, TechCrunch has found.

The exposed data included users’ display names, dates of birth, dating and sexual preferences associated with the Raw app, as well as users’ locations. Some of the location data included coordinates that were specific enough to locate Raw app users with street-level accuracy.

Raw, which launched in 2023, is a dating app that claims to offer more genuine interactions with others in part by asking users to upload daily selfie photos. The company does not disclose how many users it has, but its app listing on the Google Play Store notes more than 500,000 Android downloads to date.

News of the security lapse comes in the same week that the startup announced a hardware extension of its dating app, the Raw Ring, an unreleased wearable device that it claims will allow app users to track their partner’s heart rate and other sensor data to receive AI-generated insights, ostensibly to detect infidelity.

Notwithstanding the moral and ethical issues of tracking romantic partners and the risks of emotional surveillance, Raw claims on its website and in its privacy policy that its app, and its unreleased device, both use end-to-end encryption, a security feature that prevents anyone other than the user — including the company — from accessing the data.

When TechCrunch tried the app this week, including an analysis of the app’s network traffic, we found no evidence that the app uses end-to-end encryption. Instead, we found that the app was publicly spilling data about its users to anyone with a web browser.
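To make concrete what “spilling data to anyone with a web browser” typically means in practice, here is a minimal sketch of the general exposure pattern: an API endpoint that returns other users’ records without authentication. The URL, IDs, and field names below are invented for illustration; this is not Raw’s actual API.

```python
# Hypothetical sketch of the exposure pattern described above: an API
# endpoint that returns any user's profile without authentication.
# The URL and field names are invented; this is NOT Raw's actual API.
import requests

BASE = "https://api.example-dating-app.test/v1/users/{user_id}"

def probe(user_id: int) -> None:
    # No session token or API key is attached; a properly secured
    # endpoint should reject this request with a 401 or 403.
    resp = requests.get(BASE.format(user_id=user_id), timeout=10)
    if resp.ok:
        data = resp.json()
        # With no server-side authorization check, anyone can read
        # another user's display name and location coordinates.
        print(user_id, data.get("display_name"), data.get("location"))

for uid in range(1000, 1005):  # sequential IDs make enumeration trivial
    probe(uid)
```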

[…]

Source: Dating app Raw exposed users’ location data and personal information | TechCrunch

The UN Ditches Google for Form Submissions, Opts for Open Source ‘CryptPad’ Instead

Did you know there’s an initiative to drive Open Source adoption both within the United Nations — and globally? Launched in March, it’s the work of the Digital Technology Network (under the UN’s Chief Executives Board), which “works to advance open source technologies throughout UN agencies,” promoting “collaboration and scalable solutions to support the UN’s digital transformation.” Fun fact: the first group to endorse the initiative’s principles was the Open Source Initiative.

“The Open Source Initiative applauds the United Nations for recognizing the growing importance of Open Source in solving global challenges and building sustainable solutions, and we are honored to be the first to endorse the UN Open Source Principles,” said Stefano Maffulli, executive director of OSI.
But that’s just the beginning, writes It’s FOSS News: As part of the UN Open Source Principles initiative, the UN has invited other organizations to support and officially endorse these principles. To collect responses, they are using CryptPad instead of Google Forms… If you don’t know about CryptPad, it is a privacy-focused, open source online collaboration office suite that encrypts all of its content, doesn’t log IP addresses, and supports a wide range of collaborative documents and tools for people to use.

While this happened back in late March, we thought it would be a good idea to let people know that a well-known global governing body like the UN is slowly moving towards integrating open source tech into its organization… I sincerely hope the UN continues its push away from proprietary Big Tech solutions in favor of more open, privacy-respecting alternatives, integrating more of its workflow with such tools.

16 groups have already endorsed the UN Open Source Principles (including the GNOME Foundation, the Linux Foundation, and the Eclipse Foundation).

Here are the eight UN Open Source Principles:

  1. Open by default: Making Open Source the standard approach for projects
  2. Contribute back: Encouraging active participation in the Open Source ecosystem
  3. Secure by design: Making security a priority in all software projects
  4. Foster inclusive participation and community building: Enabling and facilitating diverse and inclusive contributions
  5. Design for reusability: Designing projects to be interoperable across various platforms and ecosystems
  6. Provide documentation: Providing thorough documentation for end-users, integrators and developers
  7. RISE (recognize, incentivize, support and empower): Empowering individuals and communities to actively participate
  8. Sustain and scale: Supporting the development of solutions that meet the evolving needs of the UN system and beyond.

Source: The UN Ditches Google for Form Submissions, Opts for Open Source ‘CryptPad’ Instead

Army Will Seek Right to Repair Clauses in All Its Contracts

A new memo from Secretary of Defense Pete Hegseth is calling on defense contractors to grant the Army the right-to-repair. The Wednesday memo is a document about “Army Transformation and Acquisition Reform” that is largely vague but highlights the very real problems with IP constraints that have made it harder for the military to repair damaged equipment.

Hegseth made this clear at the bottom of the memo in a subsection about reform and budget optimization. “The Secretary of the Army shall…identify and propose contract modifications for right to repair provisions where intellectual property constraints limit the Army’s ability to conduct maintenance and access the appropriate maintenance tools, software, and technical data—while preserving the intellectual capital of American industry,” it says. “Seek to include right to repair provisions in all existing contracts and also ensure these provisions are included in all new contracts.”

[…]

Appliance manufacturers and tractor companies have lobbied against bills that would make it easier for the military to repair its equipment.

This has been a huge problem for decades. In the 1990s, the Air Force bought Northrop Grumman’s B-2 Stealth Bombers for about $2 billion each. When the Air Force signed the contract for the machines, it paid $2.6 billion up front just for spare parts. Now, for some reason, Northrop Grumman isn’t able to supply replacement parts anymore. To fix the aging bombers, the military has had to reverse engineer parts and do repairs themselves.

Similarly, Boeing screwed over the DoD on replacement parts for the C-17 military transport aircraft to the tune of at least $1 million. The most egregious example was a common soap dispenser. “One of the 12 spare parts included a lavatory soap dispenser where the Air Force paid more than 80 times the commercially available cost or a 7,943 percent markup,” a Pentagon investigation found. Imagine if they’d just used a 3D printer to churn out the part it needed.
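As a quick sanity check on those numbers (my arithmetic, not the Pentagon’s): a markup is measured against the commercial cost, so the two figures in the quote describe the same overcharge.

```latex
% markup = (price - cost)/cost = (k - 1) x 100%, where k = price/cost
\text{markup} = (k - 1) \times 100\%
% A 7{,}943\% markup gives k = 1 + 7943/100 = 80.43,
% i.e. "more than 80 times" the commercially available cost.
k = 1 + \frac{7943}{100} = 80.43
```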

[…]

Source: Army Will Seek Right to Repair Clauses in All Its Contracts

The Not-Pebble Core Devices Watch is something different in the smartwatch space

[…] The regular slate of smartwatches from the likes of Google, Samsung, and Apple hasn’t left us truly excited. Samsung and Apple are in a race to add as many health sensors as possible to the backs of their devices, and still the most we can hope for on the impending Apple Watch Series 11 is a slimmer body and a slightly better display. Core Devices’ Core 2 Duo, in its current iteration, is practically the same device as a Pebble 2 but with a few tweaks. It includes a relatively small, 1.2-inch display and—get this—no touchscreen. You control it with buttons. The Core 2 Duo screen is black and white, while the $225 Core Time 2 has a 1.5-inch color touchscreen display and a heart rate monitor.

 

In a video posted Thursday, Core Devices founder Eric Migicovsky offered some insight into the smartwatch itself, plus more on what people in the U.S. can expect to pay for one due to Trump tariffs. He confirmed the Core 2 Duo is being made in China.

[…]

There are upgrades on the way. Migicovsky said he hopes to integrate complications—aka those little widgets that tell you the time or offer app alerts—alongside deeper Beeper integration for an all-in-one chat app. The Pebble founder said he would also like to add some sort of AI companion to the smartwatch. He cited the app Bob.ai, which can offer quick answers to simple queries through Google’s Gemini AI model. The maker has already mentioned users could connect with ChatGPT via a built-in microphone, and the new smartwatches will have a speaker so ChatGPT can talk back.

The Core 2 Duo is supposed to retail for $150.

[…]

Source: The Not-Pebble Watch Is a Sign We Crave Something Unique

I hope it works, and I hope they go with an e-ink display and huge battery life.

Iconic Gun-Makers Secretly Gave Sensitive Customer Information to Political Operatives for Decades

For years, America’s most iconic gun-makers turned over sensitive personal information on hundreds of thousands of customers to political operatives.

Those operatives, in turn, secretly employed the details to rally firearm owners to elect pro-gun politicians running for Congress and the White House, a ProPublica investigation has found.

The clandestine sharing of gun buyers’ identities — without their knowledge and consent — marked a significant departure for an industry that has long prided itself on thwarting efforts to track who owns firearms in America.

At least 10 gun industry businesses, including Glock, Smith & Wesson, Remington, Marlin and Mossberg, handed over names, addresses and other private data to the gun industry’s chief lobbying group, the National Shooting Sports Foundation. The NSSF then entered the gun owners’ details into what would become a massive database.

The data initially came from decades of warranty cards filled out by customers and returned to gun manufacturers for rebates and repair or replacement programs.

A ProPublica review of dozens of warranty cards from the 1970s through today found that some promised customers their information would be kept strictly confidential. Others said some information could be shared with third parties for marketing and sales. None of the cards informed buyers their details would be used by lobbyists and consultants to win elections.

[…]

The undisclosed collection of intimate gun owner information is in sharp contrast with the NSSF’s public image.

[…]

For two decades, the group positioned itself as an unwavering watchdog of gun owner privacy. The organization has raged against government and corporate attempts to amass information on gun buyers. As recently as this year, the NSSF pushed for laws that would prohibit credit card companies from creating special codes for firearms dealers, claiming the codes could be used to create a registry of gun purchasers.

As a group, gun owners are fiercely protective about their personal information. Many have good reasons. Their ranks include police officers, judges, domestic violence victims and others who have faced serious threats of harm.

In a statement, the NSSF defended its data collection. Any suggestion of “unethical or illegal behavior is entirely unfounded,” the statement said, adding that “these activities are, and always have been, entirely legal and within the terms and conditions of any individual manufacturer, company, data broker, or other entity.”

The gun industry companies either did not respond to ProPublica or declined to comment; some noted they are under different ownership today and could not find evidence that customer information was previously shared. One ammunition maker named in the NSSF documents as a source of data said it never gave the trade group or its vendors any “personal information.”

ProPublica established the existence of the secret program after reviewing tens of thousands of internal corporate and NSSF emails, reports, invoices and contracts. We also interviewed scores of former gun executives, NSSF employees, NRA lobbyists and political consultants in the U.S. and the United Kingdom.

The insider accounts and trove of records lay bare a multidecade effort to mobilize gun owners as a political force. Confidential information from gun customers was central to what NSSF called its voter education program. The initiative involved sending letters, postcards and later emails to persuade people to vote for the firearms industry’s preferred political candidates. Because privacy laws shield the names of firearm purchasers from public view, the data NSSF obtained gave it a unique ability to identify and contact large numbers of gun owners or shooting sports enthusiasts.

It also allowed the NSSF to figure out whether a gun buyer was a registered voter. Those who weren’t would be encouraged to register and cast their ballots for industry-supported politicians.

From 2000 to 2016, the organization poured more than $20 million into its voter education campaign, which was initially called Vote Your Sport and today is known as GunVote. The NSSF trumpeted the success of its electioneering in reports, claiming credit for putting both George W. Bush and Donald J. Trump in the White House and firearm-friendly lawmakers in the U.S. House and Senate.

In April 2016, a contractor on NSSF’s voter education project delivered a large cache of data to Cambridge Analytica

[…]

The data given to Cambridge included 20 years of gun owners’ warranty card information as well as a separate database of customers from Cabela’s, a sporting goods retailer with approximately 70 stores in the U.S. and Canada.

Cambridge combined the NSSF data with a wide array of sensitive particulars obtained from commercial data brokers. It included people’s income, their debts, their religion, where they filled prescriptions, their children’s ages and purchases they made for their kids. For women, it revealed intimate elements such as whether the underwear and other clothes they purchased were plus size or petite.

The information was used to create psychological profiles of gun owners and assign scores to behavioral traits, such as neuroticism and agreeableness. The profiles helped Cambridge tailor the NSSF’s political messages to voters based on their personalities.

[…]

As the body count from mass shootings at schools and elsewhere in the nation has climbed, those politicians have halted proposals to resurrect the assault weapons ban and enact other gun control measures, even those popular with voters, such as raising the minimum age to buy an assault rifle from 18 to 21.

In response to questions from ProPublica, the NSSF acknowledged it had used the customer information in 2016 for “creating a data model” of potentially sympathetic voters. But the group said the “existence and proven success of that model then obviated the need to continue data acquisition via private channels and today, NSSF uses only commercial-source data to which the data model is then applied.”

[…]

Source: Iconic Gun-Makers Gave Sensitive Customer Information to Political Operatives — ProPublica

Brain implant turns thought into speech

Marking a breakthrough in the field of brain-computer interfaces (BCIs), a team of researchers from UC Berkeley and UC San Francisco has unlocked a way to restore naturalistic speech for people with severe paralysis.

This work solves the long-standing challenge of latency in speech neuroprostheses, the time lag between when a subject attempts to speak and when sound is produced. Using recent advances in artificial intelligence-based modeling, the researchers developed a streaming method that synthesizes brain signals into audible speech in near-real time.
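To make the latency point concrete, here is a minimal sketch of what a streaming decoder loop looks like, assuming a pretrained model that maps a short window of neural features to an audio chunk. The window and hop sizes and the decoder API are placeholders, not the study’s actual implementation.

```python
# Minimal sketch of a streaming brain-to-speech loop. Everything here
# (window length, hop size, model interface) is a placeholder for
# illustration, not the study's actual pipeline.
import numpy as np

WINDOW = 80  # neural feature frames per inference step (placeholder)
HOP = 20     # frames to advance between steps -> low latency

def stream_decode(neural_stream, decoder, play_audio):
    """Decode incrementally instead of waiting for a full sentence."""
    buffer = []
    for frame in neural_stream:        # frames arrive in real time
        buffer.append(frame)
        if len(buffer) >= WINDOW:
            window = np.stack(buffer[-WINDOW:])
            audio_chunk = decoder(window)  # model call (placeholder)
            play_audio(audio_chunk)        # emit sound immediately
            del buffer[:HOP]               # slide the window forward
```

The key difference from earlier neuroprostheses is structural: instead of decoding an entire attempted utterance before producing any sound, the loop emits an audio chunk every few frames, which is what makes near-synchronous output possible.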

As reported today in Nature Neuroscience, this technology represents a critical step toward enabling communication for people who have lost the ability to speak. […]

We found that we could decode neural data and, for the first time, enable near-synchronous voice streaming. The result is more naturalistic, fluent speech synthesis.

[…]

The researchers also showed that their approach can work well with a variety of other brain sensing interfaces, including microelectrode arrays (MEAs) in which electrodes penetrate the brain’s surface, or non-invasive recordings (sEMG) that use sensors on the face to measure muscle activity.

“By demonstrating accurate brain-to-voice synthesis on other silent-speech datasets, we showed that this technique is not limited to one specific type of device,” said Kaylo Littlejohn, Ph.D. student at UC Berkeley’s Department of Electrical Engineering and Computer Sciences and co-lead author of the study. “The same algorithm can be used across different modalities provided a good signal is there.”

[…]

The neuroprosthesis works by sampling neural data from the motor cortex, the part of the brain that controls speech production, and then using AI to decode brain function into speech.

“We are essentially intercepting signals where the thought is translated into articulation and in the middle of that motor control,” he said. “So what we’re decoding is after a thought has happened, after we’ve decided what to say, after we’ve decided what words to use and how to move our vocal-tract muscles.”

[…]

 

Source: Brain-to-voice neuroprosthesis restores naturalistic speech – Berkeley Engineering

Neurotech companies are selling your brain data, senators warn

Three Democratic senators are sounding the alarm over brain-computer interface (BCI) technologies’ ability to collect — and potentially sell — our neural data. In a letter to the Federal Trade Commission (FTC), Sens. Chuck Schumer (D-NY), Maria Cantwell (D-WA), and Ed Markey (D-MA) called for an investigation into neurotechnology companies’ handling of user data, and for tighter regulations on their data-sharing policies.

“Unlike other personal data, neural data — captured directly from the human brain — can reveal mental health conditions, emotional states, and cognitive patterns, even when anonymized,” the letter reads. “This information is not only deeply personal; it is also strategically sensitive.”

While the concept of neural technologies may conjure up images of brain implants like Elon Musk’s Neuralink, there are far less invasive — and less regulated — neurotech products on the market, including headsets that help people meditate, purportedly trigger lucid dreaming, and promise to help users with online dating by helping them swipe through apps “based on your instinctive reaction.” These consumer products gobble up insights about users’ neurological data — and since they aren’t categorized as medical devices, the companies behind them aren’t barred from sharing that data with third parties.

“Neural data is the most private, personal, and powerful information we have—and no company should be allowed to harvest it without transparency, ironclad consent, and strict guardrails. Yet companies are collecting it with vague policies and zero transparency,” Schumer told The Verge via email.

The letter cites a 2024 report by the Neurorights Foundation, which found that most neurotech companies not only have few safeguards on user data but also have the ability to share sensitive information with third parties. The report looked at the data policies of 30 consumer-facing BCI companies and found that all but one “appear to have access to” users’ neural data, “and provide no meaningful limitations to this access.” The Neurorights Foundation only surveyed companies whose products are available to consumers without the help of a medical professional; implants like those made by Neuralink weren’t among them.

The companies surveyed by the Neurorights Foundation make it difficult for users to opt out of having their neurological data shared with third parties. Just over half the companies mentioned in the report explicitly let consumers revoke consent for data processing, and only 14 of the 30 give users the ability to delete their data. In some instances, user rights aren’t universal — for example, some companies only let users in the European Union delete their data but don’t grant the same rights to users elsewhere in the world.

To safeguard against potential abuses, the senators are calling on the FTC to:

  • investigate whether neurotech companies are engaging in unfair or deceptive practices that violate the FTC Act
  • compel companies to report on data handling, commercial practices, and third-party access
  • clarify how existing privacy standards apply to neural data
  • enforce the Children’s Online Privacy Protection Act as it relates to BCIs
  • begin a rulemaking process to establish safeguards for neural data, and set limits on secondary uses like AI training and behavioral profiling
  • and ensure that both invasive and noninvasive neurotechnologies are subject to baseline disclosure and transparency standards, even when the data is anonymized

Though the senators’ letter calls out Neuralink by name, Musk’s brain implant tech is already subject to more regulations than other BCI technologies. Since Neuralink’s brain implant is considered a “medical” technology, it’s required to comply with the Health Insurance Portability and Accountability Act (HIPAA), which safeguards people’s medical data.

Stephen Damianos, the executive director of the Neurorights Foundation, said that HIPAA may not have entirely caught up to existing neurotechnologies, especially with regards to “informed consent” requirements.

“There are long-established and validated models for consent from the medical world, but I think there’s work to be done around understanding the extent to which informed consent is sufficient when it comes to neurotechnology,” Damianos told The Verge. “The analogy I like to give is, if you were going through my apartment, I would know what you would and wouldn’t find in my apartment, because I have a sense of what exactly is in there. But brain scans are overbroad, meaning they collect more data than what is required for the purpose of operating a device. It’s extremely hard — if not impossible — to communicate to a consumer or a patient exactly what can today and in the future be decoded from their neural data.”

Data collection becomes even trickier for “wellness” neurotechnology products, which don’t have to comply with HIPAA, even when they advertise themselves as helping with mental health conditions like depression and anxiety.

Damianos said there’s a “very hazy gray area” between medical devices and wellness devices.

“There’s this increasingly growing class of devices that are marketed for health and wellness as distinct from medical applications, but there can be a lot of overlap between those applications,” Damianos said. The dividing line is often whether a medical intermediary is required to help someone obtain a product, or whether they can “just go online, put in your credit card, and have it show up in a box a few days later.”

There are very few regulations on neurotechnologies advertised as being for “wellness.” In April 2024, Colorado passed the first-ever legislation protecting consumers’ neural data. The state updated its existing Consumer Protection Act, which protects users’ “sensitive data.” Under the updated legislation, “sensitive data” now includes “biological data” like biological, genetic, biochemical, physiological, and neural information. And in September, California amended its Consumer Privacy Act to protect neural data.

“We believe in the transformative potential of these technologies, and I think sometimes there’s a lot of doom and gloom about them,” Damianos told The Verge. “We want to get this moment right. We think it’s a really profound moment that has the potential to reshape what it means to be human. Enormous risks come from that, but we also believe in leveraging the potential to improve people’s lives.”

Source: Neurotech companies are selling your brain data, senators warn | The Verge

AI Helps Unravel a Cause of Alzheimer’s Disease and Identify a Therapeutic Candidate

[Figure: Alzheimer's severity increases with PHGDH expression]

A new study found that a gene recently recognized as a biomarker for Alzheimer’s disease is actually a cause of it, due to its previously unknown secondary function. Researchers at the University of California San Diego used artificial intelligence to help both unravel this mystery of Alzheimer’s disease and discover a potential treatment that obstructs the gene’s moonlighting role.

[…]

Zhong and his team took a closer look at phosphoglycerate dehydrogenase (PHGDH), which they had previously discovered as a potential blood biomarker for early detection of Alzheimer’s disease. In a follow-up study, they later found that expression levels of the PHGDH gene directly correlated with changes in the brain in Alzheimer’s disease; in other words, the higher the levels of protein and RNA produced by the PHGDH gene, the more advanced the disease.

[…]

Using mice and human brain organoids, the researchers found that altering the amounts of PHGDH expression had consequential effects on Alzheimer’s disease: lower levels corresponded to less disease progression, whereas increasing the levels led to more disease advancement. Thus, the researchers established that PHGDH is indeed a causal gene to spontaneous Alzheimer’s disease.

In further support of that finding, the researchers determined—with the help of AI—that PHGDH plays a previously undiscovered role: it triggers a pathway that disrupts how cells in the brain turn genes on and off. And such a disturbance can cause issues, like the development of Alzheimer’s disease.

[…]

Another Alzheimer’s project in his lab, which did not focus on PHGDH, changed all this. A year ago, that project revealed a hallmark of Alzheimer’s disease: a widespread imbalance in the brain in the process where cells control which genes are turned on and off to carry out their specific roles.

The researchers were curious if PHGDH had an unknown regulatory role in that process, and they turned to modern AI for help.

With AI, they could visualize the three-dimensional structure of the PHGDH protein. Within that structure, they discovered that the protein has a substructure that is very similar to a known DNA-binding domain in a class of known transcription factors. The similarity is solely in the structure and not in the protein sequence.
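For readers who want to explore this kind of analysis themselves, here is one hedged way to reproduce its flavor (not the paper’s pipeline): fetch an AI-predicted structure for human PHGDH from the AlphaFold database and run a structure-based search with Foldseek, which can surface hits that share shape but not sequence. The UniProt accession O43175 for human PHGDH and the AlphaFold file URL scheme are assumptions worth verifying before use.

```python
# Illustrative sketch (not the paper's method): fetch a predicted 3D
# structure for human PHGDH and search for structurally similar
# domains with Foldseek. Assumes the AlphaFold DB file-naming scheme
# and UniProt accession O43175 for human PHGDH; verify both.
import subprocess
import urllib.request

URL = "https://alphafold.ebi.ac.uk/files/AF-O43175-F1-model_v4.pdb"
urllib.request.urlretrieve(URL, "phgdh.pdb")

# Foldseek compares 3D shapes, so it can surface hits (such as
# DNA-binding domains) that share no detectable sequence similarity.
# Requires a local Foldseek install and a prebuilt target database.
subprocess.run(
    ["foldseek", "easy-search", "phgdh.pdb", "pdb_db", "hits.m8", "tmp"],
    check=True,
)
```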

Zhong said, “It really demanded modern AI to formulate the three-dimensional structure very precisely to make this discovery.”

After discovering the substructure, the team then demonstrated that with it, the protein can activate two critical target genes. That throws off the delicate balance, leading to several problems and eventually the early stages of Alzheimer’s disease. In other words, PHGDH has a previously unknown role, independent of its enzymatic function, that through a novel pathway leads to spontaneous Alzheimer’s disease.

That ties back to the team’s earlier studies: the PHGDH gene produced more proteins in the brains of Alzheimer’s patients compared to the control brains, and those increased amounts of the protein in the brain triggered the imbalance. While everyone has the PHGDH gene, the difference comes down to the expression level of the gene, or how many proteins are made by it.

[…]

Given that PHGDH is such an important enzyme, there are past studies on its possible inhibitors. One small molecule, known as NCT-503, stood out to the researchers because it is not quite effective at impeding PHGDH’s enzymatic activity (the production of serine), which they did not want to change. NCT-503 is also able to penetrate the blood-brain barrier, which is a desirable characteristic.

They turned to AI again for three-dimensional visualization and modeling. They found that NCT-503 can access that DNA-binding substructure of PHGDH, thanks to a binding pocket. With more testing, they saw that NCT-503 does indeed inhibit PHGDH’s regulatory role.

When the researchers tested NCT-503 in two mouse models of Alzheimer’s disease, they saw that it significantly alleviated Alzheimer’s progression. The treated mice demonstrated substantial improvement in their memory and anxiety tests. These tests were chosen because Alzheimer’s patients suffer from cognitive decline and increased anxiety.

The researchers do acknowledge limitations of their study. One is that there is no perfect animal model for spontaneous Alzheimer’s disease. They could test NCT-503 only in the mouse models that are available, which are those with mutations in known disease-causing genes.

Still, the results are promising, according to Zhong.

[…]

Source: AI Helps Unravel a Cause of Alzheimer’s Disease and Identify a Therapeutic Candidate

Can the EU’s Dual Strategy of Regulation and Investment Redefine AI Leadership?

Beyond sharing a pair of vowels, AI and the EU both present significant challenges when it comes to setting the right course. This article makes the case that reducing regulation for large general-purpose AI providers under the EU’s competitiveness agenda is not a silver bullet for catching Europe up to the US and China, and would only serve to entrench European dependencies on US tech. Instead, by combining its regulatory toolkit and ambitious investment strategy, the EU is uniquely positioned to set the global standard for trustworthy AI and pursue its tech sovereignty. It is an opportunity that Europe must take.

Recent advances in AI have drastically shortened the improvement cycle from years to months, thanks to new inference-time compute techniques that enable self-prompting, chain-of-thought reasoning in models like OpenAI’s o1 and DeepSeek’s R1. However, these rapid gains also increase risks like AI-enabled cyber offenses and biological attacks. Meanwhile, the EU and France recently committed €317 billion to AI development in Europe, joining a global race with comparably large announcements from both the US and China.

Now turning to EU AI policy, the newly established AI Office and 13 independent experts are nearing the end of a nine-month multistakeholder drafting process for the Code of Practice (CoP): the voluntary technical details of the AI Act’s mandatory provisions for general-purpose AI providers. The vast majority of the rules will apply only to the largest model providers, ensuring proportionality: the protection of SMEs, start-ups, and other downstream industries. In the meantime, the EU has fully launched a competitiveness agenda, with the Commission’s recently published Competitiveness Compass and first omnibus simplification package outlining plans for widespread streamlining of reporting obligations amidst mounting pushback against this simplified narrative. Add to this the recent withdrawal of the AI Liability Directive, and it’s clear to see which way the political winds are blowing.

So why must this push for simplification be replaced by a push for trustworthy market creation in the case of general-purpose AI and the Code of Practice? I’ll make three main points: 1) Regulation is not the reason for Europe lacking Big Tech companies, 2) Sweeping deregulation creates legal uncertainty and liability risks for downstream deployers, and slows trusted adoption of new technologies and thereby growth, 3) Watering down the CoP for upstream model providers with systemic risk will almost exclusively benefit large US incumbents, entrenching dependency and preventing tech sovereignty.

[…]

The EU’s tech ecosystem had ample time to emerge in the years preceding and following the turn of the century, free of so-called “red tape,” but this did not happen and will not happen again through deregulation […] One reason presented by Bradford is that the European digital single market remains fragmented, with differing languages, cultures, consumer preferences, administrative obstacles, and tax regimes preventing large tech companies from seamlessly growing within the bloc and throughout the world. Even more fragmented are the capital markets of the EU, resulting in poor access to venture capital for tech start-ups and scale-ups. Additional points include harsh, national-level bankruptcy laws that are “creditor-oriented” in the EU, compared to more forgiving “debtor-friendly” equivalents in the US, resulting in lower risk appetite among European entrepreneurs. Finally, skilled migration is significantly more streamlined in the US, with federal-level initiatives like the H-1B visa leading to the majority of Big Tech CEOs hailing from overseas.

[…]

The downplaying of regulation as Europe’s AI hindrance has been repeated by leading industry voices such as US VC firm a16z, European VC firm Merantix Capital, and French provider MistralAI. To reiterate: the EU ‘lagging behind’ on trillion-dollar tech companies and the accompanying innovation was not a result of regulation before there was regulation, and is also not a result of regulation after.

[…]

Whether for planes, cars, or drugs, early use of dangerous new technologies, without accompanying rules, saw frequent preventable accidents, reducing consumer trust and slowing market growth. Now, with robust checks and balances in place from well-resourced regulatory authorities, such markets have been able to thrive, providing value and innovation to citizens. Other sectors, like nuclear energy and, more recently, crypto, have suffered from an initial lack of regulation, causing industry corner-cutting, leading to infamous disasters (from Fukushima to the collapse of FTX) from which public trust has been difficult to win back. Regulators around the world are currently risking the same fate for AI.

This point is particularly relevant for so-called ‘downstream deployers’: companies that build applications on top of underlying models usually provided by Big Tech. Touted by European VC leader Robert Lacher as Europe’s “huge opportunity” in AI, downstream deployers, particularly SMEs, stand to gain from the Code of Practice, which ensures that necessary regulatory checks and balances occur upstream at the level of the model provider.

[…]

Finally, the EU’s enduring and now potentially crippling dependency on US technology companies has been importantly addressed by the new Commission, best exemplified by the title of Executive Vice President Henna Virkkunen’s file: Tech Sovereignty, Security and Democracy. With the last few months’ geopolitical developments, including all-time-low transatlantic relations and an unfolding trade war, some have gone as far as warning of the possibility of US technology being used for surveillance of Europe and of the US sharing intelligence with Russia. Clearly, the urgency of tech sovereignty has drastically increased. A strong Code of Practice would return agency to the EU, ensuring that US upstream incumbents meet basic security, safety, and ethical standards whilst also easing the EU’s AI adoption problem by ensuring technology is truly trustworthy.

So, concretely, what needs to be done? Bruegel economist Mario Mariniello summed it up concisely: “On tech regulation, the European Union should be bolder.”

[…]

This article has outlined why deregulating highly capable AI models, produced by the world’s largest companies, is not a solution to Europe’s growth problem. Instead of stripping back obligations that ensure protections for European citizens, the EU must combine its ambitious AI investment plan with boldly pursuing leadership in setting global standards, accelerating trustworthy adoption, and ensuring tech sovereignty. This combination will put Europe on the right path to drive this technological revolution forward for the benefit of all.

Source: Can the EU’s Dual Strategy of Regulation and Investment Redefine AI Leadership? | TechPolicy.Press

Europe’s Tech Sovereignty Demands More Than Competitiveness

BRUSSELS – As part of his confrontational stance toward Europe, US President Donald Trump could end up weaponizing critical technologies. The European Union must appreciate the true nature of this threat instead of focusing on competing with the US as an economic ally. To achieve true tech sovereignty, the EU should transcend its narrow focus on competitiveness and deregulation and adopt a far more ambitious strategy.

[…]

Europe’s growing anxiety about competitiveness is fueled by its inability to challenge US-based tech giants where it counts: in the market. As the Draghi report points out, the productivity gap between the United States and the EU largely reflects the relative weakness of Europe’s tech sector. Recent remarks by European Commission President Ursula von der Leyen and Tech Commissioner Henna Virkkunen suggest that policymakers have taken Draghi’s message to heart, making competitiveness the central focus of EU tech policy. But this singular focus is both insufficient and potentially counterproductive at a time of technological and geopolitical upheaval. While pursuing competitiveness could reduce Big Tech’s influence over Europe’s economy and democratic institutions, it could just as easily entrench it. European leaders’ current fixation on deregulation – turbocharged by the Draghi report – leaves EU policymaking increasingly vulnerable to lobbying by powerful corporate interests and risks legitimizing policies that are incompatible with fundamental European values.

As a result, the European Commission’s deregulatory measures – including its recent decision to shelve draft AI and privacy rules, and its forthcoming “simplification” of tech legislation including the GDPR – are more likely to benefit entrenched tech giants than they are to support startups and small and medium-size enterprises. Meanwhile, Europe’s hasty and uncritical push for “AI competitiveness” risks reinforcing Big Tech’s tightening grip on the AI technology stack.

It should come as no surprise that the Draghi report’s deregulatory agenda was warmly received in Silicon Valley, even by Elon Musk himself. But the ambitions of some tech leaders go far beyond cutting red tape. Musk’s use of X (formerly Twitter) and Starlink to interfere in national elections and the war in Ukraine, together with the Trump administration’s brazen attacks on EU tech regulation, show that Big Tech’s quest for power poses a serious threat to European sovereignty.

Europe’s most urgent task, then, is to defend its citizens’ rights, sovereignty, and core values from increasingly hostile American tech giants and their allies in Washington. The continent’s deep dependence on US-controlled digital infrastructure – from semiconductors and cloud computing to undersea cables – not only undermines its competitiveness by shutting out homegrown alternatives but also enables the owners of that infrastructure to exploit it for profit.

[…]

Strong enforcement of competition law and the Digital Markets Act, for example, could curb Big Tech’s influence while creating space for European startups and challengers to thrive. Similarly, implementing the Digital Services Act and the AI Act will protect citizens from harmful content and dangerous AI systems, empowering Europe to offer a genuine alternative to Silicon Valley’s surveillance-driven business models. Against this backdrop, efforts to develop homegrown European alternatives to Big Tech’s digital infrastructure have been gaining momentum. A notable example is the so-called “Eurostack” initiative, which should be viewed as a key step in defending Europe’s ability to act independently.

[…]

A “competitive” economy holds little value if it comes at the expense of security, a fair and safe digital environment, civil liberties, and democratic values. Fortunately, Europe doesn’t have to choose. By tackling its technological dependencies, protecting democratic governance, and upholding fundamental rights, it can foster the kind of competitiveness it truly needs.

Source: Europe’s Tech Sovereignty Demands More Than Competitiveness by Marietje Schaake & Max von Thun – Project Syndicate

Deregulation has led to huge problems globally: monopoly/duopoly problems we can’t seem to deal with; reliance on external markets and companies that whimsically change their minds; unsustainable hardware and software choices that leave devices bricked, poorly secured, and irreparable; vendor lock-in to closed-source ecosystems; damage to innovation; privacy invasions that lead to hacking attacks; and so on. As Europe we can make our own choices about our own values – we are not driven by the singular motive of profit. European values are inclusive and also promote things like education and happiness.

Content Moderation or Security Theater? How Social Media Platforms Really Enforce Community Guidelines

[…] To better understand how major platforms moderate content, we studied and compared the community guidelines of Meta, TikTok, YouTube, and X.

We must note that platforms’ guidelines often evolve, so the information used in this study is based only on the latest available data at the time of publication. Moreover, the strictness and regularity of policy implementation may vary per platform.

Content Moderation

We were able to categorize 3 main methods of content moderation in major platforms’ official policies: AI-based enforcement, human or staff review, and user reporting.

[Figure: Content moderation practices (AI enforcement, human review, and user reporting) across Meta, TikTok, YouTube, and X]

Notably, TikTok is the only platform that doesn’t officially employ all 3 content moderation methods. It only clearly defines the process of user reporting, although it mentions that it relies on a “combination of safety approaches.” Content may go through an automated review, especially those from accounts with previous violations, and human moderation when necessary.

Human or staff review and AI enforcement are observed in the other 3 platforms’ policies. In most cases, the platforms claim to employ the methods hand-in-hand. YouTube and X (formerly Twitter) describe using a combination of machine learning and human reviewers. Meta has a unique Oversight Board that manages more complicated cases.

Criteria for Banning Accounts

| Criterion | Meta | TikTok | YouTube | X |
| --- | --- | --- | --- | --- |
| Severe single violation | ✓ | ✓ | ✓ | ✓ |
| Repeated violations | ✓ | ✓ | ✓ | ✓ |
| Circumventing enforcement | – | ✓ | – | ✓ |

All four platforms’ policies provide for account bans for repeated violations or a single “severe” violation. Of the 4 platforms, TikTok and X are the only ones to include circumventing moderation enforcement as additional grounds for account banning.

Content Restrictions

| Platform | Age Restrictions | Adult Content | Gore | Graphic Violence |
| --- | --- | --- | --- | --- |
| Meta | 10-12 (supervised), 13+ | Allowed with conditions | Allowed with conditions | Allowed with conditions |
| TikTok | 13+ | Prohibited | Allowed with conditions | Prohibited |
| YouTube | Varies | Prohibited | Prohibited | Prohibited |
| X | 18+ | Allowed (with labels) | Allowed with conditions | Prohibited |

Content depicting graphic violence is the most widely prohibited in platforms’ policies, with only Meta allowing it with conditions (the content must be “newsworthy” or “professional”).

Adult content is also heavily moderated per the official community guidelines. X allows it given adequate labels, while the other platforms restrict any content with nudity or sexual activity that isn’t for educational purposes.

YouTube is the only one to impose a blanket prohibition on gory or distressing materials. The other platforms allow such content but might add warnings for users.

[Figure: Policy strictness across platforms, ranked from least (1) to most (5) strict across 6 categories]

All platforms have a zero-tolerance policy for content relating to child exploitation. Other types of potentially unlawful content — or those that threaten people’s lives or safety — are also restricted with varying levels of strictness. Meta allows discussions of crime for awareness or news but prohibits advocating for or coordinating harm.

Other official metrics for restriction include the following:

[Figure: Platforms' official community guidelines regarding free speech vs. fact-checking, news and education, and privacy and security]

What Gets Censored the Most?

Overall, major platforms’ community and safety guidelines are generally strict and clear regarding what’s allowed or not. However, what content moderation looks like in practice may be very different.

We looked at censorship patterns for videos on major social media platforms, including Instagram Reels, TikTok, Facebook Reels, YouTube Shorts, and X.

The dataset considered a wide variety of videos, ranging from entertainment and comedy to news, opinion, and true crime. Across the board, the types of content we observed to be most commonly censored include:

  • Profanity: Curse words were censored via audio muting, bleeping, or subtitle redaction.
  • Explicit terms: Words pertaining to sexual activity or self-harm were omitted or replaced with alternative spellings.
  • Violence and conflict: References to weapons, genocide, geopolitical conflicts, or historical violence resulted in muted audio, altered captions, or warning notices, especially on TikTok and Instagram.
  • Sexual abuse: Content related to human trafficking and sexual abuse had significant censorship, often requiring users to alter spellings (e.g., “s3x abuse” or “trffcked”).
  • Racial slurs: Some instances of censored racial slurs were found in rap music videos on TikTok and X.

[Figure: Types of content censored and censorship methods observed across platforms]

Instagram seems to heavily censor explicit language, weapons, and sexual content, mostly through muting and subtitle redaction. Content depicting war, conflict, graphic deaths and injuries, or other potentially distressing materials often requires users to click through a “graphic content” warning before they can view the image or video.

Facebook primarily censors profanity and explicit terms through audio bleeping and subtitle removal. However, some news-related posts are able to retain full details.

On the other hand, TikTok uses audio censorship and alters captions. As such, many creators regularly use coded language when discussing sensitive topics. YouTube also employs similar filters, muting audio or blurring visuals extensively to hide profanity and explicit words or graphics. However, it still allows offensive words in some contexts (educational, scientific, etc.).

X combines a mix of redactions, visual blurring, and muted audio. Profanity and graphic violence are sometimes left uncensored, but sensitive content will typically get flagged or blurred, especially once reported by users.

| Censorship Method | Platforms Using It | Description/Example |
| --- | --- | --- |
| Muted or bleeped audio | Instagram, TikTok, Facebook, YouTube, X | Profanity, explicit terms, and violence-related speech altered or omitted |
| Redacted or censored subtitles | Instagram, TikTok, Facebook, X | Sensitive words (e.g., “n*****,” “fu*k,” and “traff*cked”) altered or omitted |
| Blurred video or images | Instagram, Facebook, X | Sensitive content (e.g., death and graphic injuries) blurred and labeled with a warning |

News and Information Accounts

Our study confirmed that news outlets and credible informational accounts are sometimes subject to different moderation standards.

Posts on Instagram, YouTube, and X (from accounts like CNN or BBC) discussing war or political violence were only blurred and presented with an initial viewing warning, but they were not muted or altered in any way. Meanwhile, user-generated content discussing similar topics faced audio censorship.

On the other hand, comedic and entertainment posts still experienced strict regulations on profanity, even on news outlets. This suggests that humor and artistic contexts likely don’t exempt content from moderation, regardless of the type of account or creator.

The Coded Language Workaround

A widespread workaround for censorship is the use of coded language to bypass automatic moderation. Below are some of the most common ones we observed:

  • “Fuck” → “fk,” “f@ck,” “fkin,” or a string of 4 special characters
  • “Ass” → “a$$,” “a**,” or “ahh”
  • “Gun” → “pew pew” or a hand gesture in lieu of saying the word
  • “Genocide” → “g*nocide”
  • “Sex” → “s3x,” “seggs,” or “s3ggs”
  • “Trafficking” → “tr@fficking,” or “trffcked”
  • “Kill” → “k-word”
  • “Dead” → “unalive”
  • “Suicide” → “s-word,” or “s**cide”
  • “Porn” → “p0rn,” “corn,” or corn emoji
  • “Lesbian” → “le$bian” or “le dollar bean”
  • “Rape” → “r@pe,” “grape,” or grape emoji

This is the paradox of modern content moderation: how effective are “strict” guidelines when certain types of accounts are occasionally exempt from them and other users can exploit simple loopholes?

Since coded words are widely and easily understood, this suggests that AI-based censorship mainly filters out direct violations rather than stopping or removing sensitive discussions altogether.
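A minimal sketch of why this is the case: a naive keyword filter matches exact tokens, so a single character swap evades it, while a normalization pass that maps common substitutions back to canonical letters catches some (but not all) coded spellings. The blocklist and substitution map below are illustrative only; production moderation systems go well beyond keyword lists.

```python
# Minimal sketch of why naive keyword filters are easy to evade, and
# how a normalization pass catches common substitutions. Illustrative
# only; real moderation pipelines are far more sophisticated.
import re

BLOCKLIST = {"sex", "gun", "kill"}

# Map common leetspeak/symbol swaps back to canonical letters.
SUBSTITUTIONS = str.maketrans({"3": "e", "0": "o", "@": "a", "$": "s", "1": "i"})

def naive_flag(text: str) -> bool:
    words = re.findall(r"\w+", text.lower())
    return any(w in BLOCKLIST for w in words)

def normalized_flag(text: str) -> bool:
    normalized = text.lower().translate(SUBSTITUTIONS)
    normalized = normalized.replace("*", "")  # drop masking characters
    words = re.findall(r"\w+", normalized)
    return any(w in BLOCKLIST for w in words)

print(naive_flag("let's talk about s3x"))       # False -> filter evaded
print(normalized_flag("let's talk about s3x"))  # True  -> caught
print(normalized_flag("seggs"))                 # False -> still evaded
```

Note the last case: substitutions that replace a whole word ("seggs", "unalive", "le dollar bean") defeat character-level normalization entirely, which is exactly the gap the coded-language list above exploits.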

Is Social Media Moderation Just Security Theater?

Overall, it’s clear that platform censorship for content moderation is enforced inconsistently.

Given that our researchers are also subject to the algorithmic biases of the platforms tested, and we’re unlikely to be able to interact with shadowbanned accounts, we can’t fully quantify or qualify the extent of restrictions that some users suffer for potentially showing inappropriate content.

However, we know that many creators are able to circumvent or avoid automated moderation. Certain types of accounts receive preferential treatment in terms of restrictions. Moreover, with social media apps’ heavy reliance on AI moderation, users are able to evade restrictions with the slightest modifications or substitutions.

Are Platforms Capable of Implementing Strict Blanket Restrictions on “Inappropriate” Content?

Given how heavily most people rely on social media to engage with the world, it could be considered impractical or even ineffective to try to restrict sensitive conversations. This is particularly true when context is excluded and restrictions focus solely on keywords, which is often the case for automated moderation.

Also, one might ponder whether content restrictions are primarily in place for liability protection instead of user safety — especially if platforms know about the limitations of AI-based moderation but continue to use it as their primary means of enforcing community guidelines.

Are Social Media Platforms Deliberately Performing Selective Moderation?

At the beginning of 2025, Meta made waves after it announced that it would be removing fact-checkers. Many suggested that this change was influenced by the seemingly new goodwill between its founder and CEO, Mark Zuckerberg, and United States President Donald Trump.

Double standards are also apparent in other platforms whose owners have clear political ties. Elon Musk, a popular supporter and backer of Trump, has been reported to spread misinformation about government spending — posting or reposting false claims on X, the platform he owns.

This is despite the platform’s guidelines clearly prohibiting “media that may result in widespread confusion on public issues, impact public safety, or cause serious harm.”

Given the seemingly one-sided implementation of policies on different social media sites, we believe individuals and organizations must practice careful scrutiny when consuming media or information on these platforms.

Community guidelines aren’t fail-safes for ensuring safe, uplifting, and constructive spaces online. We believe that what AI algorithms or fact-checkers consider safe shouldn’t be seen as the standard or universal truth. That is, not all restricted posts are automatically “harmful,” the same way not all retained posts are automatically true or reliable.

Ultimately, the goal of this study is to help digital marketers, social media professionals, journalists, and the general public learn more about the evolving mechanics of online expression. With the insights gathered from this research, we hope to spark conversation about the effectiveness and fairness of content moderation in the digital space.

[…]

Source: Content Moderation or Security Theater? How Social Media Platforms Really Enforce Community Guidelines

1 Million French Boulanger Customers Exposed Online for Free

In a recent discovery, SafetyDetectives’ Cybersecurity Team stumbled upon a clear web forum post in which a threat actor publicized a database allegedly belonging to Boulanger Electroménager & Multimédia, purportedly exposing 5 million of its customers.

What is Boulanger Electroménager & Multimédia?

Boulanger Electroménager & Multimédia is a French company that specializes in the sale of household appliances and multimedia products.

Founded in 1954, according to their website, Boulanger has physical stores and delivers its products to clients across France. The company also offers an app, which has over 1 million downloads on the Google Play Store and Apple’s App Store.

Where Was The Data Found?

The data was found in a forum post available on the clear web. This well-known forum operates message boards dedicated to database downloads, leaks, cracks, and more.

What Was Leaked?

The author of the post included two links to the unparsed and clean datasets, which purportedly belong to Boulanger. They claim the unparsed dataset consists of a 16GB .JSON file with 27,561,591 records, whereas the clean dataset is a 500MB .CSV file with 5 million records.

Links to both datasets were hidden and set to be shown after giving a like or leaving a comment on the post. As a result, the data was set to be unlocked for free by anyone with an account on the forum who was willing to simply interact with the post.

Our Cybersecurity Team reviewed part of the datasets to assess their authenticity, and we can confirm that the data appears to be legitimate. After running a comparative analysis, it appears these datasets correspond to the data purportedly stolen in the 2024 cyber incident.

Back in September 2024, Boulanger was one of the targets of a ransomware attack that also affected other retailers, such as Truffaut and Cultura. A threat actor with the nickname “horrormar44” claimed responsibility for the breach.

At the time, the data was offered on a different well-known clear web forum — which is currently offline — at a price of €2,000. Although there allegedly were some potential buyers, it is unclear if the sale was actually finalized. In any case, it seems the data has resurfaced now as free to download.

While reviewing the data, we found that the clean dataset contains just over 1 million rows, one customer per row, with some duplicates. While that’s still a considerable number of customers, it’s far smaller than the 5 million claimed by the author of the post.
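As an illustration of how a claim like this can be verified, a row-and-duplicate count over the leaked CSV takes only a few lines. This is a hedged sketch, not the actual tooling used, and the filename is hypothetical:

```python
# Illustrative sketch: count total and unique rows in a CSV to compare
# against the 5 million records claimed in the forum post.
# "boulanger_clean.csv" is a hypothetical filename.
import csv

total, unique = 0, set()
with open("boulanger_clean.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        total += 1
        unique.add(tuple(row))

print(f"{total} rows, {len(unique)} unique, {total - len(unique)} duplicates")
```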

The sensitive information allegedly belonging to Boulanger’s customers included:

  • Name
  • Surname
  • Full physical address
  • Email address
  • Phone number

[…]

Source: 27 Million Records from French Boulanger’s Customers Allegedly Exposed Online

Google turns early Nest Thermostats into dumb thermostats

Google has just announced that it’s ending software updates for the first-generation Nest Learning Thermostat, released in 2011, and the second-gen model that came a year later. This decision also affects the European Nest Learning Thermostat from 2014. “You will no longer be able to control them remotely from your phone or with Google Assistant, but can still adjust the temperature and modify schedules directly on the thermostat,” the company wrote in a Friday blog post.

[…]

Google is flatly stating that it has no plans to release additional Nest thermostats in Europe. “Heating systems in Europe are unique and have a variety of hardware and software requirements that make it challenging to build for the diverse set of homes,” the company said. “The Nest Learning Thermostat (3rd gen, 2015) and Nest Thermostat E (2018) will continue to be sold in Europe while current supplies last.”

[…]

Source: Google is killing software support for early Nest Thermostats | The Verge

Yes, so in about a year they will be dumb thermostats too. I don’t think I would buy one of those then.

Microsoft mystery folder fix needs a fix of its own with simple POC

Turns out Microsoft’s latest patch job might need a patch of its own, again. This time, the culprit is a mysterious inetpub folder quietly deployed by Redmond, now hijacked by a security researcher to break Windows updates.

The folder, typically c:\inetpub, reappeared on Windows systems in April as part of Microsoft’s mitigation for CVE-2025-21204, an exploitable elevation-of-privileges flaw within Windows Process Activation. Rather than patching code directly, Redmond simply pre-created the folder to block a symlink attack path. For many administrators, the reappearance of this old IIS haunt raised eyebrows, especially since the mitigation did little beyond ensuring the folder existed.

For at least one security researcher, in this case Kevin Beaumont, the fix also presented an opportunity to hunt for more vulnerabilities. After poking around, he discovered that the workaround introduced a new flaw of its own, triggered using the mklink command with the /j parameter.

It’s a simple enough function. According to Microsoft’s documentation, mklink “creates a directory or file symbolic or hard link.” And with the /j flag, it creates a directory junction – a type of filesystem redirect.

Beaumont demonstrated this by running: “mklink /j c:\inetpub c:\windows\system32\notepad.exe.” This turned the c:\inetpub folder – precreated in Microsoft’s April 2025 update to block symlink abuse – into a redirect to a system executable. When Windows Update tried to interact with the folder, it hit the wrong target, errored out, and rolled everything back.

“So you just go without security updates,” he noted.

The kicker? No admin rights are required. On many default-configured systems, even standard users can run the same command, effectively blocking Windows updates without ever escalating privileges.
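For administrators who want to check whether the mitigation folder has been tampered with, a junction test is straightforward. Below is a minimal sketch, assuming Python 3.12+ on Windows (which added os.path.isjunction); it is our illustration, not official Microsoft remediation guidance:

```python
# Minimal sketch: check whether the CVE-2025-21204 mitigation folder has
# been replaced by a directory junction (mklink /j tampering).
# Assumes Python 3.12+ on Windows for os.path.isjunction.
import os

TARGET = r"C:\inetpub"

# Test for a junction first: a junction also reports True for isdir().
if os.path.isjunction(TARGET):
    print(f"WARNING: {TARGET} is a directory junction - possible tampering.")
elif os.path.isdir(TARGET):
    print(f"{TARGET} is a normal directory.")
else:
    print(f"{TARGET} is missing; the mitigation expects this folder to exist.")
```

If the junction check fires, removing the junction and recreating the plain folder would presumably restore the mitigation's intended state.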

[…]

Source: Microsoft mystery folder fix might need a fix of its own • The Register

Employee monitoring app exposes 21M work screens to the internet

A surveillance tool meant to keep tabs on employees is leaking millions of real-time screenshots onto the open web.

Your boss watching your screen isn’t the end of the story. Everyone else might be watching, too. Researchers at Cybernews have uncovered a major privacy breach involving WorkComposer, a workplace surveillance app used by over 200,000 people across countless companies.

The app, designed to track productivity by logging activity and snapping regular screenshots of employees’ screens, left over 21 million images exposed in an unsecured Amazon S3 bucket, broadcasting how workers go about their day frame by frame.

[…]

WorkComposer is one of many time-tracking tools that have crept into modern work culture. Marketed as a way to keep teams “accountable,” the software logs keystrokes, tracks how long you spend on each app, and snaps desktop screenshots every few minutes.

The leak shows just how dangerous this setup becomes when basic security hygiene is ignored. A leak of this magnitude turns everyday work activity into a goldmine for cybercriminals.

[…]

The leak’s real-time nature only amplifies the danger, as threat actors could monitor unfolding business operations as they happen, giving them access to otherwise locked-down environments.

[…]

Source: Employee monitoring app exposes 21M work screens | Cybernews

Can you imagine the time lost by employers going over all this data?

Microsoft Windows’ new built-in spying tool, Microsoft Recall, is a really great idea too. Not.

Internet Archive Sued for $700m by Record Labels over Digitising Pre-1960 Songs. Petition to Rescue the Internet Archive

A dramatic appeal hopes to ensure the survival of the nonprofit Internet Archive. The signatories of a petition, which is now open for further signatures, are demanding that the US recording industry association RIAA and participating labels such as Universal Music Group (UMG), Capitol Records, Sony Music, and Arista drop their lawsuit against the online library. The legal dispute, pending since mid-2023 and expanded in March, centers on the “Great 78” project. This project aims to save 500,000 song recordings by digitizing 250,000 records from the period 1880 to 1960. Various institutions and collectors have donated the records, which are made for 78 revolutions per minute (“shellac”), so that the Internet Archive can put this cultural treasure online.

The music companies originally demanded $372 million for the online publication of the songs and the associated “mass theft.” They recently increased their demand to $700 million for potential copyright infringement. The basis for the lawsuit is the Music Modernization Act, which US President Donald Trump approved in 2018. This includes the CLASSICS Act. This law retroactively introduces federal copyright protection for sound recordings made before 1972, which until then were protected in the US by differing state laws. The monopoly rights now apply US-wide for a good 100 years (for recordings made before 1946) or until 2067 (for recordings made between 1947 and 1972).

The lawsuit ultimately threatens the existence of the entire Internet Archive, including the well-known Wayback Machine, they say. This important public service is used by millions of people every day to access historical “snapshots” from the web. Journalists, educators, researchers, lawyers, and citizens use it to verify sources, investigate disinformation, and maintain public accountability. The legal attack also puts a “critical infrastructure of the internet” at risk. And this at a time when digital information is being deleted, overwritten, and destroyed: “We cannot afford to lose the tools that preserve memory and defend facts.” The Internet Archive was forced to delete 500,000 books as recently as 2024. It also continually struggles with IT attacks.

The case is called Universal Music Group et al. v. Internet Archive. The lawsuit was originally filed in the U.S. District Court for the Southern District of New York (Case No. 1:23-cv-07133), but is now pending in the U.S. District Court for the Northern District of California (Case No. 3:23-cv-6522). The Internet Archive takes the position that the Great 78 project does not harm the music industry. Quite the opposite: Anyone who wants to enjoy music uses commercial streaming services anyway; the old 78 rpm shellac recordings are study material for researchers.

Source: Suit of record labels: Petition to rescue the Internet Archive | heise online (NB this is a Google Translate page from the original German page)

Original page here: https://www.heise.de/news/Klage-von-Plattenlabels-Petition-zur-Rettung-des-Internet-Archive-10358777.html

How can copyright law be so incredibly wrong all the time?!

Australian Radio station uses AI host for 6 months before anyone notices

I got an interesting tipoff the other day that Sydney radio station CADA is using an AI avatar instead of an actual radio host.

The story goes that their workdays presenter – a woman called Thy – actually doesn’t exist. She’s a character made using AI, and rolled out onto CADA’s website.

[…]

What is Thy’s last name? Who is she? Where did she come from? There is no biography, or further information about the woman who is supposedly presenting this show.

Compare that to the (recently resigned) breakfast presenter Sophie Nathan or the drive host K-Sera. Both their show pages include multi-paragraph biographies which include details about their careers and various accolades. They both have a couple of different photos taken during various press shoots.

But perhaps the strangest thing about Thy is that she appears to be a young woman in her 20s who has absolutely no social media presence. This is particularly unusual for someone who works in the media, where the size of your audience is proportionate to your bargaining power in the industry.

There are no photos or videos of Thy on CADA’s socials, either. It seems she was photographed just once and then promptly turned invisible.

[…]

I decided to listen back to previous shows, using the radio archiving tool Flashback. Thy hasn’t been on air for the last fortnight. Before then, the closest thing to a radio host can be found just before the top of the hour. A rather mechanical-sounding female voice announces what songs are coming up. This person does not give her name, and none of the sweepers announce her or the show.

I noticed that on two different days, Thy announced ‘old school’ songs. On the 25th it was “old school Beyonce”, and then on the 26th it was “old school David Guetta”. Across two different days, the intonation was, I thought, strikingly similar.

To illustrate the point, I isolated the voice and layered the two clips onto audio tracks. There is a bit of interference from the imperfectly-removed song playing underneath the voice, but the host sounds identical in both instances.

Despite all this evidence, there’s still a slim chance that Thy is a person. She might be someone who doesn’t like social media and is a bit shy around the office. Or perhaps she’s a composite of a couple of real people: someone who recorded her voice to be synthesised, another who’s licensing her image.

[…]

Source: Meet Thy – the radio host I don’t think exists

[…] An ARN spokesperson said the company was exploring how new technology could enhance the listener experience.

“We’ve been trialling AI audio tools on CADA, using the voice of Thy, an ARN team member. This is a space being explored by broadcasters globally, and the trial has offered valuable insights.”

However, it has also “reinforced the power of real personalities in driving compelling content”, the spokesperson added.

The Australian Financial Review reported that Workdays with Thy has been broadcast on CADA since November, reaching at least 72,000 people in last month’s ratings.

[…]

CADA isn’t the first radio station to use an AI-generated host. Two years ago, Australian digital radio company Disrupt Radio introduced its own AI newsreader, Debbie Disrupt.

Source: AI host: ARN radio station CADA called out for failing to disclose AI host

Now both of these articles go off the rails about the use of AI, saying the radio station should have disclosed that it was using an AI host. There is absolutely no legal obligation to disclose this, and I think it’s pretty cool that AI is progressing to the point that this can be done. So now, if you want to be a broadcaster yourself, you can enforce your station vision 24/7 – which you could never possibly do on your own.

ElevenLabs — a generative AI audio platform that transforms text into speech

And write, apparently. Someone needed to produce the “script” the AI host used, which may also have had some AI involvement, I suppose, but ultimately this seems to be just a glorified text-to-speech engine trying to cash in on the AI bubble. Or maybe they took it to the next logical step and just feed it a playlist, and it generates the necessary “filler” from that and what it can find online from a search of the artist and title, plus some random chit-chat from a (possibly) curated list of relevant current affairs articles.

Frankly, if people couldn’t tell for six months, then whatever they are doing is clearly good enough and the smarter radio DJs are probably already thinking about looking for other work or adding more interactive content like interviews into their shows. Talk Show type presenters probably have a little longer, but it’s probably just a matter of time for them too.

Source: https://radio.slashdot.org/comments.pl?sid=23674797&cid=65329681

A Data Breach at Yale New Haven Health Compromised 5.5 Million Patients’ Information

[…] Yale New Haven Health (YNHHS), a massive nonprofit healthcare network in Connecticut. Hackers stole the data of more than 5.5 million individuals during an attack in March 2025.

[…]

According to a public notice on the YNHHS website, the organization discovered “unusual activity” on its system on March 8, 2025, which was later identified as unauthorized third-party access that allowed bad actors to copy certain patient data. While the information stolen varies by individual, it may include the following:

  • Name
  • Date of birth
  • Address
  • Phone number
  • Email address
  • Race
  • Ethnicity
  • Social Security number
  • Patient type
  • Medical record number

YNHHS says the breach did not include access to medical records, treatment information, or financial data (such as account and payment information).

[…]

Source: A Data Breach at Yale New Haven Health Compromised 5.5 Million Patients’ Information | Lifehacker

Wait – race and ethnicity?!

Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads

CEO Aravind Srinivas said this week on the TBPN podcast that one reason Perplexity is building its own browser is to collect data on everything users do outside of its own app. This is so it can sell premium ads.

“That’s kind of one of the other reasons we wanted to build a browser, is we want to get data even outside the app to better understand you,” Srinivas said. “Because some of the prompts that people do in these AIs is purely work-related. It’s not like that’s personal.”

And work-related queries won’t help the AI company build an accurate-enough dossier.

“On the other hand, what are the things you’re buying; which hotels are you going [to]; which restaurants are you going to; what are you spending time browsing, tells us so much more about you,” he explained.

Srinivas believes that Perplexity’s browser users will be fine with such tracking because the ads should be more relevant to them.

“We plan to use all the context to build a better user profile and, maybe you know, through our discover feed we could show some ads there,” he said.

The browser, named Comet, suffered setbacks but is on track to be launched in May, Srinivas said.

He’s not wrong, of course. Quietly following users around the internet helped Google become the roughly $2 trillion market cap company it is today.

[…]

Meta’s ad-tracking technology, Pixel, which is embedded on websites across the internet, is how Meta gathers data, even on people who don’t have Facebook or Instagram accounts. Even Apple, which has marketed itself as a privacy protector, can’t resist tracking users’ locations to sell advertising in some of its apps by default.
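For readers unfamiliar with the mechanism: a tracking pixel is just a tiny image whose every load tells the tracker’s server who viewed which page. The sketch below is a generic illustration of the technique, not Meta’s actual implementation; the port and endpoint are made up:

```python
# Generic tracking-pixel sketch -- illustrative only, not Meta's code.
# A site embeds <img src="http://tracker.example:8000/p.gif">; each page
# load then sends this server the visitor's IP, user agent, and the
# embedding page (via the Referer header).
from http.server import BaseHTTPRequestHandler, HTTPServer

# Smallest valid transparent 1x1 GIF, served as the "pixel".
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The "tracking" step: log who requested the pixel and from where.
        print(self.client_address[0],
              self.headers.get("User-Agent"),
              self.headers.get("Referer"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

HTTPServer(("", 8000), PixelHandler).serve_forever()
```

Because the request comes from the visitor’s browser on every site embedding the pixel, the tracker can correlate browsing across sites even for people with no account on its platform.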

On the other hand, this kind of thing has led people across the political spectrum in the U.S. and in Europe to distrust big tech.

The irony of Srinivas openly explaining his browser-tracking ad-selling ambitions this week also can’t be overstated.

Google is currently in court fighting the U.S. Department of Justice, which has alleged Google behaved in monopolistic ways to dominate search and online advertising. The DOJ wants the judge to order Google to divest Chrome.

Both OpenAI and Perplexity — not surprisingly, given Srinivas’ reasons — said they would buy the Chrome browser business if Google was forced to sell.

Source: Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads | TechCrunch

Yup, so even if Mozilla is making Firefox more invasive, it’s still better than these guys.

Study finding persistent chemical in European wines raises doubts and concerns

A report by the Pesticides Action Network (PAN Europe) and other NGOs that uncovered high concentrations of a forever chemical in wines from across the EU – including organic – is sparking debate about the causes of contamination and restrictions on the substance. 

The report found some wines had trifluoroacetic acid (TFA) levels 100 times higher than the strictest threshold for drinking water in Europe.

TFA is part of the PFAS (per- and polyfluoroalkyl) family of substances used in many products, including pesticides, for their water-repellent properties. Extremely persistent in the environment, they are a known threat to human health.

“This is a wake-up call,” said Helmut Burtscher-Schaden, an environmental chemist at Global 2000, one of the NGOs behind the research. “TFA is a permanent chemical and will not go away.” 

The NGOs analysed 49 wines. A comparison of modern wines with older vintages suggested no detectable residues in pre-1988 wines but a sharp increase since 2010.

“For no other agricultural product are the harvests from past decades so readily available and well-preserved,” the study said.

PAN sees a correlation between rising levels of TFA in wine and the growing use of PFAS-based pesticides.

Under the spotlight

Though nearly a quarter of Austria’s vineyards are cultivated organically, Austrian bottles are over-represented in the list of contaminated wines (18 out of 49), as the NGOs began testing in that country before expanding the reach of the research.

[… Winemakers complain about the study, who would have thought…]

In response, the European executive’s officials passed the buck to member states, noting they had resisted the Commission’s proposal to stop renewing approvals for certain PFAS pesticides. An eventual agreement was reached on just two substances.

More could be done to limit PFAS chemicals at the national level under the current EU legislation, Commission representatives said.

Source: Study finding persistent chemical in European wines raises doubts and concerns – Euractiv

Spacetop AR is now an expensive Windows app instead of a useless screenless laptop

The Spacetop AR laptop made a splash when it debuted a few years ago with an intriguing pitch: What if you could have a notebook that works entirely through augmented reality glasses, without a built-in screen of its own? Unfortunately, we found the Spacetop experience to be underwhelming, and the hardware seemed like a tough sell for $1,900. Last Fall, Spacetop’s creator Sightful told CNET that it was abandoning the screen-less laptop altogether and instead focusing on building AR software for Windows PCs. Now, we have a clearer sense of what Sightful is up to.

Today, Sightful is officially launching Spacetop for Intel-powered Windows AI PCs, following a short trial launch in January. For $899 you get a pair of XREAL’s Air 2 Ultra glasses and a year of Spacetop’s software. Afterwards, you’ll have to pay $200 annually for a subscription. The software works just like the original Spacetop concept — it gives you a large 100-inch AR interface for doing all of your productivity work — except now you’re not stuck with the company’s middling keyboard and other hardware.

[…]

Spacetop doesn’t support Intel chips without NPUs, as its AR interface requires constant AI processing. It doesn’t work with AMD’s or Qualcomm’s AI CPUs, either.

[…]

In a conversation with Engadget, Sightful CEO Tamir Berliner noted that the company might pay more attention to other chip platforms if it gets similar attention.

[…]

you’ll have to get used to wearing Xreal’s large Air 2 Ultra glasses. When we demoed them at CES, we found them to be an improvement over previous Xreal frames, thanks to their sharp 1080p micro-OLED displays and wider field of view. The Air 2 Ultra is also notable for having 6DoF tracking, which allows you to move around AR objects. While sleeker than the Vision Pro, the glasses are still pretty clunky, and you’ll also have to snap in additional prescription frames if necessary.

I’ll need to see this latest iteration of Spacetop in action before making any final judgments, but it’s clearly a more viable concept as an app that can work on a variety of laptops. Nobody wants to buy bespoke hardware like the old Spacetop laptop, no matter how good of a party trick it may be.

Source: Spacetop AR is now an expensive Windows app instead of a useless screenless laptop

This looks like an excellent idea and one which I would love to get if it wasn’t tied so much to hardware and $200 per year.

EC fines Meta, Apple €700M for DMA compliance failures

Meta and Apple have earned the dubious honor of being the first companies fined for non-compliance with the EU’s Digital Markets Act, which experts say could inflame tensions between US President Donald Trump and the European bloc.

Apple was penalised to the tune of €500 million ($570 million) for violating anti-steering rules and Meta by €200 million ($228 million) for its “consent or pay” ad model, the EU said in a press release.

The fines are a pittance for both firms, whose most recent quarterly earnings statements from January saw Apple report $36.33 billion in net income, and Meta $20.83 billion.

Apple’s penalty related to anti-steering violations – for which it’s already paid a €1.8 billion penalty to the EU – saw it found guilty of not allowing app developers to direct users outside Apple’s own in-app payment system for cheaper alternatives. The European Commission also ordered Apple to “remove the technical and commercial restrictions on steering” while simultaneously closing an investigation into Apple’s user choice obligations, finding that “early and proactive” moves by Cupertino to address compliance shortcomings resolved the issue.

Meta, on the other hand, was fined for the pay-or-consent model whereby it offered a paid, ad-free version of its services as the only alternative to allowing the company to harvest user data. The strategy earned it considerable ire in Europe for exactly the reason the EU began investigating it last year: That it still ingested data even if users paid and that it wasn’t clear about how personal data was being collected or used.

“The Commission found that this model is not compliant with the DMA,” the EC said, because it gave users no choice to opt into a service that used less of their data, nor did it allow users to freely consent to having their data combined.

That fine only applies to the period between March and November 2024, when the consent-or-pay model was active. The EU said that a new advertising model introduced in November of last year resolved many of its concerns, though European privacy advocate Max Schrems says it will likely still be an issue.

“Meta has moved to a system with a ‘pay,’ a ‘consent’ and a ‘less ads’ option,” Schrems explained in a statement emailed to The Register. Schrems said the “less ads” option is nothing but a distraction.

“It has massive usability limitations – nothing any user seriously wants,” Schrems said. “Meta has simply created a ‘fake choice’, pretending that it would overcome the illegal ‘pay or okay’ approach.”

Alongside the fines, the EU also said that it was removing Facebook Marketplace’s designation as a DMA gatekeeper, as it had too few commercial users to qualify as “an important gateway for business users to reach end users.”

[… followed by stuff about how Americans don’t like the fines in usual snowflakey Trump style crying tantrums]

Source: EC fines Meta, Apple €700M for DMA compliance failures • The Register