We are losing vast swathes of our digital past, and copyright stops us saving it

It is hard to imagine the world without the Web. Collectively, we routinely access billions of Web pages without thinking about it. But we often take it for granted that the material we want to access will be there, both now and in the future. We all hit the dreaded “404 not found” error from time to time, but merely pass on to other pages. What we tend to ignore is how these online error messages are a flashing warning signal that something bad is happening to the World Wide Web. Just how bad is revealed in a new report from the Pew Research Center, based on an examination of half a million Web pages, which found:

A quarter of all webpages that existed at one point between 2013 and 2023 are no longer accessible, as of October 2023. In most cases, this is because an individual page was deleted or removed on an otherwise functional website.

For older content, this trend is even starker. Some 38% of webpages that existed in 2013 are not available today, compared with 8% of pages that existed in 2023.

This digital decay occurs at slightly different rates for different online material:

23% of news webpages contain at least one broken link, as do 21% of webpages from government sites. News sites with a high level of site traffic and those with less are about equally likely to contain broken links. Local-level government webpages (those belonging to city governments) are especially likely to have broken links.

54% of Wikipedia pages contain at least one link in their “References” section that points to a page that no longer exists.
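Measuring link rot of this kind comes down to re-requesting old URLs and classifying the responses. Here is a minimal sketch of that idea in Python; the status-code classification and function names are illustrative assumptions of mine, not the Pew methodology:

```python
import urllib.error
import urllib.request

def check_url(url, timeout=10.0):
    """Classify a URL as 'ok', 'gone', or 'error' (unreachable/transient)."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "link-rot-check/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout):
            return "ok"
    except urllib.error.HTTPError as e:
        # 404/410 mean the page itself is gone; treat other codes as noise
        return "gone" if e.code in (404, 410) else "error"
    except (urllib.error.URLError, TimeoutError):
        return "error"

def rot_rate(results):
    """Fraction of definitively checked pages that are no longer there."""
    checked = [r for r in results if r != "error"]
    return sum(r == "gone" for r in checked) / len(checked) if checked else 0.0
```

A real study would also need to handle soft 404s (pages that return 200 but display an error), which is considerably harder to detect.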

These figures show that the problem we discussed a few weeks ago – that access to academic knowledge is at risk – is in fact far wider, and applies to just about everything that is online. Although the reasons for material disappearing vary greatly, the key obstacle to addressing that loss is the same across all fields. The copyright industry’s obsessive control of material, and the punitive laws that can be deployed against even the most trivial copyright infringement, mean that routine and multiple backup copies of key or historic online material are rarely made.

The main exception to that rule is the sterling work carried out by the Internet Archive, which was founded by Brewster Kahle, whose Kahle/Austin Foundation supports this blog. At the time of writing the Internet Archive holds copies of an astonishing 866 billion Web pages, many in multiple versions that chart their changes over time. It is a unique and invaluable resource.
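The Archive's holdings are also queryable programmatically: the Wayback Machine exposes a public availability endpoint (https://archive.org/wayback/available) that reports the closest saved snapshot of a URL. A small sketch; the endpoint and response shape are the documented ones, but the helper functions are my own:

```python
import json
import urllib.parse
import urllib.request

def parse_snapshot(payload):
    """Pull the closest available snapshot URL out of an API response dict."""
    snap = payload.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

def wayback_lookup(url):
    """Ask the Wayback Machine for the closest saved copy of `url`."""
    api = ("https://archive.org/wayback/available?url="
           + urllib.parse.quote(url, safe=""))
    with urllib.request.urlopen(api, timeout=10) as resp:
        return parse_snapshot(json.load(resp))

if __name__ == "__main__":
    print(wayback_lookup("example.com") or "no archived copy found")
```

When a page 404s, a lookup like this will often turn up a copy that the Internet Archive saved before the original vanished.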

It is also being sued by publishers for daring to share in a controlled way some of its holdings. That is, the one bulwark against losing vast swathes of our digital culture is being attacked by an industry that is largely to blame for the problem the Internet Archive is trying to solve. It’s another important reason why we must move away from the copyright system, and nullify the power it has to destroy, rather than create, our culture.

Source: We are losing vast swathes of our digital past, and copyright stops us saving it – Walled Culture

First-mover advantage found in the arts shows copyright isn’t necessary to protect innovative creativity

One of the arguments sometimes made in defence of copyright is that without it, creators would be unable to compete with the hordes of copycats that would spring up as soon as their works became popular. Copyright is needed, supporters say, to prevent less innovative creators from producing works that are closely based on new, successful ideas. However, this approach has led to constant arguments and court cases over how close a “closely based” work can be before it infringes on the copyright of others. A good example of this is the 2022 lawsuit involving Ed Sheeran, where it was argued that using just four notes of a scale constituted copyright infringement of someone else’s song employing the same tiny motif. A fascinating new paper looks at things from a different angle. It draws on the idea of “first-mover advantage”, the fact that:

individuals that move to a new market niche early on (“first movers”) obtain advantages that may lead to larger success, compared to those who move to this niche later. First movers enjoy a temporary near-monopoly: since they enter a niche early, they have little to no competition, and so they can charge larger prices and spend more time building a loyal customer base.

The paper explores the idea in detail for the world of music. Here, first-mover advantage means:

The artists and music producers who recognize the hidden potential of a new artistic technique, genre, or style, have bigger chances of reaching success. Having an artistic innovation that your competitors do not have or cannot quickly acquire may become advantageous on the winner-take-all artistic market.

Analysing nearly 700,000 songs across 110 different musical genres, the researchers found evidence that first-mover advantage was present in 91 of the genres. The authors point out that there is also anecdotal evidence of first-mover advantage in other arts:
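As a toy illustration of what such an analysis tests, consider one genre’s entrants as (entry year, success score) pairs and ask whether the earlier half outperforms the later half. This split-half comparison is my own simplification, not the statistical method used in the paper:

```python
def first_mover_advantage(entries):
    """entries: (entry_year, success_score) pairs for one genre.

    True if the earlier half of entrants is, on average, more
    successful than the later half."""
    ranked = sorted(entries)            # earliest entrants first
    half = len(ranked) // 2
    if half == 0:
        return False                    # too few entrants to compare
    early = [score for _, score in ranked[:half]]
    late = [score for _, score in ranked[half:]]
    return sum(early) / len(early) > sum(late) / len(late)
```

The paper itself works with proper statistics over real music data across 110 genres; the point here is only the shape of the question being asked.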

For example, Agatha Christie—one of the recognized founders of “classical” detective novel—is also one of the best-selling authors ever. Similarly, William Gibson’s novel Neuromancer—a canonical work in the genre of cyberpunk—is also one of the earliest books in this strand of science fiction. In films, the cult classic The Blair Witch Project is the first recognized member of the highly successful genre of found-footage horror fiction.

Although copyright may be present, first-mover advantage does not require it to operate – it is simply a function of being early with a new idea, which means that competition is scarce or non-existent. If further research confirms the wider presence of first-mover advantage in the creative world – for example, even where sharing-friendly CC licences are used – it will knock down yet another flimsy defence of copyright’s flawed and outdated intellectual monopoly.

Source: First-mover advantage in the arts means copyright isn’t necessary to protect innovative creativity – Walled Culture

The world’s first tooth-regrowing drug has been approved for human trials

[…] medicine quite literally regrows teeth and was developed by a team of Japanese researchers, as reported by New Atlas. The research has been led by Katsu Takahashi, head of dentistry and oral surgery at Kitano Hospital. The intravenous drug deactivates the uterine sensitization-associated gene-1 (USAG-1) protein that suppresses tooth growth. Blocking USAG-1 from interacting with other proteins triggers bone growth and, voila, you got yourself some brand-new chompers. Pretty cool, right?

Human trials start in September, but the drug has been highly successful when treating ferrets and mice and did its job with no serious side effects. Of course, the usual caveat applies. Humans are not mice or ferrets, though researchers seem confident that it’ll work on Homo sapiens. This is due to a 97 percent similarity in how the USAG-1 protein works when comparing humans to other species.

September’s clinical trial will include adults who are missing at least one molar, but there’s a secondary trial coming aimed at children aged two to seven. The kids in the second trial will all be missing at least four teeth due to congenital tooth deficiency. Finally, a third trial will focus on older adults who are missing “one to five permanent teeth due to environmental factors.”

Takahashi and his fellow researchers are so optimistic about this drug that they predict the medicine will be available for everyday consumers by 2030. So in six years we can throw our toothbrushes away and eat candy bars all day and all night without a care in the world (don’t actually do that.)

While this is the first drug that can fully regrow missing teeth, the science behind it builds on top of years of related research. Takahashi, after all, has been working on this since 2005. Recent advancements in the field include regenerative tooth fillings to repair diseased teeth and stem cell technology to regrow the dental tissue of children.

Source: The world’s first tooth-regrowing drug has been approved for human trials

What’s Actually In Tattoo Ink? No One Really Knows

Nearly a third of U.S. adults have tattoos, so plenty of you listeners can probably rattle off the basic guidelines of tattoo safety: Make sure you go to a reputable tattoo artist who uses new, sterile needles. Stay out of the ocean while you’re healing so you don’t pick up a smidgen of flesh-eating bacteria. Gently wash your new ink with soap and water, avoid sun exposure and frequently apply an unscented moisturizer—easy-peasy.

But body art enthusiasts might face potential risks from a source they don’t expect: tattoo inks themselves. Up until relatively recently, tattoo inks in the U.S. were totally unregulated. In 2022 the federal government pulled tattoo inks under the regulatory umbrella of cosmetics, which means the Food and Drug Administration can oversee these products. But now researchers are finding that many commercial inks contain ingredients they’re not supposed to. Some of these additives are simply compounds that should be listed on the packaging and aren’t. But others could pose a risk to consumers.

For Science Quickly, I’m Rachel Feltman. I’m joined today by John Swierk, an assistant professor of chemistry at Binghamton University, State University of New York. His team is trying to figure out exactly what goes into each vial of tattoo ink—and how tattoos actually work in the first place—to help make body art safer, longer-lasting and maybe even cooler.

[…]

one of the areas we got really interested in was trying to understand why light causes tattoos to fade. This is a huge question when you think about something like laser tattoo removal, where you’re talking about an industry on the scale of $1 billion a year.

And it turns out we really don’t understand that process. And so starting to look at how the tattoo pigments change when you expose them to light, what that might be doing in the skin, then led us to a lot of other questions about tattoos that we realized weren’t well understood—even something as simple as what’s actually in tattoo ink.

[…]

recently we’ve been looking at commercial tattoo inks and sort of surprised to find that in the overwhelming majority of them, we’re seeing things that are not listed as part of the ingredients….Now that doesn’t necessarily mean the things that are in these inks are unsafe, but it does cause a huge problem if you want to try to understand something about the safety of tattoos.

[…]

I think most people would agree that it would be great to know that tattoo inks are safe [and] being made safely, you know? And of course, that’s not unique to tattoo inks; cosmetics and supplements have a lot of similar problems that we need to work on.

But, if we’re going to get a better grasp on the chemistry and even the immunology of tattoos, that’s not just going to help us make them safer but, you know, potentially improve healing, appearance, longevity.

I mean, I think about that start-up that promised “ephemeral tattoos” that now folks a few years later are coming out and saying, “These tattoos have not gone away,” and thinking about how much potential there is for genuine innovation if we can start to answer some of these questions.

[…]

we can start to think about designing new pigments that might have better colorfastness, less reactivity, less sort of bleeding of the lines, right, over time. But all of those things can only happen if we actually understand tattoos, and we really just don’t understand them that well at the moment.

[…]

We looked at 54 inks, and of the 54, 45 had what we consider to be major discrepancies—so these were either unlisted pigments or unlisted additives.

And that was really concerning to us, right? You’re talking about inks coming from major, global, industry-leading manufacturers all the way down to smaller, more niche inks—that there were problems across the board.

So we found things like incorrect pigments being listed. We found issues of some major allergens being used—these aren’t necessarily compounds that are specifically toxic, but to some people they can generate a really pronounced allergic response.

And a couple of things: we found an antibiotic that’s most commonly used for urinary tract infections.

We found a preservative that the FDA has cautioned nursing mothers against, you know, having exposure to—so things that at a minimum, need to be disclosed so that consumers could make informed choices.

[…]

if somebody’s thinking about getting a tattoo, they should be working with an artist who is experienced, who has apprenticed under experienced artists, who is really following best practices in terms of sanitation, aftercare, things like that. That’s where we know you can have a problem. Beyond that, I think it’s a matter of how comfortable you are with some degree of risk.

The point I always really want to emphasize is that, you know, our work isn’t saying anything about whether tattoos are safe or not.

It’s the first step in that process. Just because we found some stuff in the inks doesn’t mean that you shouldn’t get a tattoo or that you have a new risk for skin cancer or something like that…. it’s that this is the process of how science grows, right—that we have to start understanding the basics and the fundamentals so that we can build the next questions on top of that.

And our understanding of tattoos in the body is still at such an early level that we don’t really even understand what the risk factors would be, “What should we be looking for?”

So I think it’s like with anything in life: if you’re comfortable with a degree of risk, then, yeah, go ahead and get the tattoo. People get tattoos for lots of reasons that are important and meaningful and very impactful in a positive way in their life. And I think a concern over a hypothetical risk is probably not worth the potential positives of getting a tattoo.

We know that light exposure— particularly the sunlight—is not great for the tattoo, and if we have concerns about long-term pigment breakdown, ultraviolet light is probably going to enhance that, so keeping your tattoo covered, using sunscreen when you can’t keep it covered—that’s probably very important. If you’re really concerned about the risk, we can think about the size of the tattoo. So somebody with a relatively small piece of line art on their back is in a very different potential risk category than somebody who is fully sleeved and, you know, covered from, say, neck to ankle in tattoos.

And again we’re not saying that either those people have a significant risk that they need to be worried about, but if somebody is concerned, the person with the small line art on the back is much less likely to have to worry about the risk than somebody with a huge tattoo.

We also know that certain colors, like yellow in particular, fade much more readily. That suggests that those pigments are interacting with the body a lot more.

Staying away from bright colors and focusing on black inks might be a more prudent option there, but again, right, a lot of these are hypothetical and we don’t want to alarm people or scare them.

[…]

We’re also still working on understanding what tattoo pigments break down into.

We really don’t understand a lot about laser tattoo removal, and if there is some aspect of tattooing that gives me pause, it’s probably that part. It’s a very reasonable concern, I think, that you may have pigments that are entirely safe in the skin, but once you start zapping them with high-powered lasers, we don’t know what you do to the chemistry, and so that could change the dynamic a lot. And so we’re trying to figure out how to do that and, I think, making some progress there. And then the last area—which is new to us but kind of fun—is actually just looking at the biomechanics of tattooing. You would think that we’d really understand how the ink goes into the skin, how it stays in the skin, but the picture there is a little bit hazy.

[…]

One of the interesting things, when you talk to ink manufacturers and artists, is that they sort of have this intuitive feel for … sort of what the viscosity of the ink should be like and how much pigment is in there but can’t necessarily articulate why a particular viscosity is good or why a particular pigment loading is good. And so we think if we understand something about the process by which the ink goes…and so we think understanding the biomechanics could really open some interesting possibilities and lead to better, more interesting tattoos down the road as well.

[…]

Source: What’s Actually In Tattoo Ink? No One Really Knows | Scientific American

Over 165 Snowflake customers didn’t use MFA, says Mandiant

An unknown financially motivated crime crew has swiped a “significant volume of records” from Snowflake customers’ databases using stolen credentials, according to Mandiant.

“To date, Mandiant and Snowflake have notified approximately 165 potentially exposed organizations,” the Google-owned threat hunters wrote on Monday, and noted they track the perps as “UNC5537.”

The crew behind the Snowflake intrusions may have ties to Scattered Spider, aka UNC3944 – the notorious gang behind the mid-2023 Las Vegas casino breaches.

“Mandiant is investigating the possibility that a member of UNC5537 collaborated with UNC3944 on at least one past intrusion in the past six months, but we don’t have enough data to confidently link UNC5537 to a broader group at this time,” senior threat analyst Austin Larsen told The Register.

Mandiant – one of the incident response firms hired by Snowflake to help investigate its recent security incident – also noted that there’s no evidence a breach of Snowflake’s own enterprise environment was to blame for its customers’ breaches.

“Instead, every incident Mandiant responded to associated with this campaign was traced back to compromised customer credentials,” the Google-owned threat hunters confirmed.

The earliest detected attack against a Snowflake customer instance happened on April 14. Upon investigating that breach, Mandiant says it determined that UNC5537 used legitimate credentials – previously stolen using infostealer malware – to break into the victim’s Snowflake environment and exfiltrate data. The victim did not have multi-factor authentication turned on.

About a month later, after uncovering “multiple” Snowflake customer compromises, Mandiant contacted the cloud biz and the two began notifying affected organizations. By May 24 the criminals had begun selling the stolen data online, and on May 30 Snowflake issued its statement about the incidents.

After gaining initial access – which we’re told occurred through the Snowflake native web-based user interface or a command-line interface running on Windows Server 2022 – the criminals used a horribly named utility, “rapeflake,” which Mandiant has instead chosen to track as “FROSTBITE.”

UNC5537 has used both .NET and Java versions of this tool to perform reconnaissance against targeted Snowflake customers, allowing the gang to identify users, their roles, and IP addresses.

The crew also sometimes uses DBeaver Ultimate – a publicly available database management utility – to query Snowflake instances.

Several of the initial compromises occurred on contractor systems that were being used for both work and personal activities.

“These devices, often used to access the systems of multiple organizations, present a significant risk,” Mandiant researchers wrote. “If compromised by infostealer malware, a single contractor’s laptop can facilitate threat actor access across multiple organizations, often with IT and administrator-level privileges.”

All of the successful intrusions had three things in common, according to Mandiant. First, the victims didn’t use MFA.

Second, the attackers used valid credentials, “hundreds” of which were stolen thanks to infostealer infections – some as far back as 2020. Common variants used included VIDAR, RISEPRO, REDLINE, RACCOON STEALER, LUMMA and METASTEALER. But even in these years-old thefts, the credentials had not been updated or rotated.

Almost 80 percent of the customer accounts accessed by UNC5537 had prior credential exposure, we’re told.

Finally, the compromised accounts did not have network allow-lists in place. So if you are a Snowflake customer, it’s time to get a little smarter.
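Mandiant’s three common factors translate directly into an account audit. A sketch of that check in Python; the record format and field names here are illustrative assumptions of mine, not Snowflake’s actual API:

```python
from datetime import date

def audit_account(acct, today, max_age_days=90):
    """Return the risk factors from Mandiant's list that apply to one account."""
    risks = []
    if not acct.get("mfa_enabled"):
        risks.append("no MFA")
    rotated = acct.get("password_rotated")
    if rotated is None or (today - rotated).days > max_age_days:
        risks.append("stale credentials")   # e.g. creds stolen in 2020, never rotated
    if not acct.get("network_allowlist"):
        risks.append("no network allow-list")
    return risks
```

An account flagged on all three counts matches the profile of every victim in this campaign.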

Source: Over 165 Snowflake customers didn’t use MFA, says Mandiant • The Register

Oddly enough, given the size of the hack, they don’t mention the Ticketmaster 560m+ account breach, which was confirmed as part of what seems to be a spree hitting Snowflake customers. Also oddly, when you Google Snowflake you get the corporate page and some Wikipedia entries, but not very much about the hack. Considering the size and breadth of the problem, this is surprising. But perhaps not, considering Mandiant is part of Google.

Finnish startup Flow claims it can 100x any CPU’s power with its companion chip

A Finnish startup called Flow Computing is making one of the wildest claims ever heard in silicon engineering: by adding its proprietary companion chip, any CPU can instantly double its performance, increasing to as much as 100x with software tweaks.

If it works, it could help the industry keep up with the insatiable compute demand of AI makers.

Flow is a spinout of VTT, a Finland state-backed research organization that’s a bit like a national lab. The chip technology it’s commercializing, which it has branded the Parallel Processing Unit, is the result of research performed at that lab (though VTT is an investor, the IP is owned by Flow).

The claim, Flow is first to admit, is laughable on its face. You can’t just magically squeeze extra performance out of CPUs across architectures and code bases. If so, Intel or AMD or whoever would have done it years ago.

But Flow has been working on something that has been theoretically possible — it’s just that no one has been able to pull it off.

Central Processing Units have come a long way since the early days of vacuum tubes and punch cards, but in some fundamental ways they’re still the same. Their primary limitation is that as serial rather than parallel processors, they can only do one thing at a time. Of course, they switch that thing a billion times a second across multiple cores and pathways — but these are all ways of accommodating the single-lane nature of the CPU. (A GPU, in contrast, does many related calculations at once but is specialized in certain operations.)

“The CPU is the weakest link in computing,” said Flow co-founder and CEO Timo Valtonen. “It’s not up to its task, and this will need to change.”

CPUs have gotten very fast, but even with nanosecond-level responsiveness, there’s a tremendous amount of waste in how instructions are carried out simply because of the basic limitation that one task needs to finish before the next one starts. (I’m simplifying here, not being a chip engineer myself.)

What Flow claims to have done is remove this limitation, turning the CPU from a one-lane street into a multi-lane highway. The CPU is still limited to doing one task at a time, but Flow’s PPU, as they call it, essentially performs nanosecond-scale traffic management on-die to move tasks into and out of the processor faster than has previously been possible.
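A loose software analogy for the idea: if tasks in a nominally serial stream are independent, a scheduler can fan them out without touching the task definitions themselves. This Python sketch is purely conceptual and says nothing about how Flow’s hardware actually works:

```python
from concurrent.futures import ThreadPoolExecutor

def run_serial(tasks, args):
    """One task at a time, like a classic single-lane CPU stream."""
    return [f(x) for f, x in zip(tasks, args)]

def run_parallel(tasks, args):
    """Same tasks, same code, but the scheduler overlaps independent work."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda fx: fx[0](fx[1]), zip(tasks, args)))
```

Both functions produce identical results; only the scheduling differs, which is the code-transparency property Flow is claiming at the hardware level.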

[…]

This type of thing isn’t brand new, says Valtonen. “This has been studied and discussed in high-level academia. You can already do parallelization, but it breaks legacy code, and then it’s useless.”

So it could be done. It just couldn’t be done without rewriting all the code in the world from the ground up, which kind of makes it a non-starter. A similar problem was solved by another Nordic compute company, ZeroPoint, which achieved high levels of memory compression while keeping data transparency with the rest of the system.

Flow’s big achievement, in other words, isn’t high-speed traffic management, but rather doing it without having to modify any code on any CPU or architecture that it has tested.

[…]

Therein lies the primary challenge to Flow’s success as a business: Unlike a software product, Flow’s tech needs to be included at the chip-design level, meaning it doesn’t work retroactively, and the first chip with a PPU would necessarily be quite a ways down the road. Flow has shown that the tech works in FPGA-based test setups, but chipmakers would have to commit quite a lot of resources to see the gains in question.

[…]

Further performance gains come from refactoring and recompiling software to work better with the PPU-CPU combo. Flow says it has seen increases up to 100x with code that’s been modified (though not necessarily fully rewritten) to take advantage of its technology. The company is working on offering recompilation tools to make this task simpler for software makers who want to optimize for Flow-enabled chips.

Analyst Kevin Krewell from Tirias Research, who was briefed on Flow’s tech and consulted for an outside perspective on these matters, was more worried about industry uptake than the fundamentals.

[…]

Flow is just now emerging from stealth, with €4 million (about $4.3 million) in pre-seed funding led by Butterfly Ventures, with participation from FOV Ventures, Sarsia, Stephen Industries, Superhero Capital and Business Finland.

Source: Flow claims it can 100x any CPU’s power with its companion chip and some elbow grease | TechCrunch

This sounds a bit like the co-processors you used to be able to install in the 70s/80s/early 90s.

China state hackers infected 20,000 govt and defence Fortinet VPNs via a critical vulnerability exploited for at least 2 months before disclosure

Hackers working for the Chinese government gained access to more than 20,000 VPN appliances sold by Fortinet using a critical vulnerability that the company failed to disclose for two weeks after fixing it, Netherlands government officials said.

The vulnerability, tracked as CVE-2022-42475, is a heap-based buffer overflow that allows hackers to remotely execute malicious code. It carries a severity rating of 9.8 out of 10. A maker of network security software, Fortinet silently fixed the vulnerability on November 28, 2022, but failed to mention the threat until December 12 of that year, when the company said it became aware of an “instance where this vulnerability was exploited in the wild.” On January 11, 2023—more than six weeks after the vulnerability was fixed—Fortinet warned a threat actor was exploiting it to infect government and government-related organizations with advanced custom-made malware.
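The gaps in that timeline are worth making explicit; a quick check of the article’s “two weeks” and “more than six weeks” figures:

```python
from datetime import date

silent_fix = date(2022, 11, 28)       # patch shipped, nothing disclosed
first_mention = date(2022, 12, 12)    # "exploited in the wild" notice
full_warning = date(2023, 1, 11)      # warning about government targets

quiet_days = (first_mention - silent_fix).days   # 14 days: the two silent weeks
warning_gap = (full_warning - silent_fix).days   # 44 days: more than six weeks
print(quiet_days, warning_gap)
```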

Enter CoatHanger

The Netherlands officials first reported in February that Chinese state hackers had exploited CVE-2022-42475 to install an advanced and stealthy backdoor tracked as CoatHanger on Fortigate appliances inside the Dutch Ministry of Defense. Once installed, the never-before-seen malware, specifically designed for the underlying FortiOS operating system, was able to permanently reside on devices even when rebooted or receiving a firmware update. CoatHanger could also escape traditional detection measures, the officials warned. The damage resulting from the breach was limited, however, because infections were contained inside a segment reserved for non-classified uses.

On Monday, officials with the Military Intelligence and Security Service (MIVD) and the General Intelligence and Security Service in the Netherlands said that to date, Chinese state hackers have used the critical vulnerability to infect more than 20,000 FortiGate VPN appliances sold by Fortinet. Targets include dozens of Western government agencies, international organizations, and companies within the defense industry.

“Since then, the MIVD has conducted further investigation and has shown that the Chinese cyber espionage campaign appears to be much more extensive than previously known,” Netherlands officials with the National Cyber Security Center wrote. “The NCSC therefore calls for extra attention to this campaign and the abuse of vulnerabilities in edge devices.”

Monday’s report said that exploitation of the vulnerability started two months before Fortinet first disclosed it and that 14,000 servers were backdoored during this zero-day period. The officials warned that the Chinese threat group likely still has access to many victims because CoatHanger is so hard to detect and remove.

[…]

Fortinet’s failure to timely disclose is particularly acute given the severity of the vulnerability. Disclosures are crucial because they help users prioritize the installation of patches. When a new version fixes minor bugs, many organizations often wait to install it. When it fixes a vulnerability with a 9.8 severity rating, they’re much more likely to expedite the update process. Given the vulnerability was being exploited even before Fortinet fixed it, the disclosure likely wouldn’t have prevented all of the infections, but it stands to reason it could have stopped some.

Fortinet officials have never explained why they didn’t disclose the critical vulnerability when it was fixed. They have also declined to disclose what the company policy is for the disclosure of security vulnerabilities. Company representatives didn’t immediately respond to an email seeking comment for this post.

Source: China state hackers infected 20,000 Fortinet VPNs, Dutch spy service says | Ars Technica