Volkswagen brings back physical buttons for all new cars

Future Volkswagen interiors will all draw inspiration from the ID 2all concept car and bring back physical buttons and controls.

The touchscreen-heavy approach taken for the Mk8 Golf and ID 3 has proven unpopular with customers, prompting a complete about-turn by the company in the way it approaches design.

VW interior designer Darius Watola said the ID 2all concept “showed a new approach for all models” and came in response to “recent feedback from customers”.

The new interior has a row of physical (and backlit) buttons for the climate and a rotary controller on the centre tunnel to control the screen on the dashboard above, much like with BMW’s iDrive.

As well as a main central touchscreen for infotainment, there’s also a screen for driving information. Watola said such a display in the driver’s eyeline is crucial for safety.

He said that “customers had a different view in Europe” than in other global markets and wanted “more physical buttons”.

There’s also a revolution in terms of material use, as VW is looking to phase out hard plastics, glue, leather and chrome.

Almost every surface in the ID 2all is soft to the touch, mixing fabrics and Alcantara as part of a sustainability push. There’s limited use of some woods and metals, too.

Watola expressed a desire to carry over as many features and materials as possible from the concept to the production car due in 2025 (which now seems unlikely to take the ID 2 name into showrooms).

However, the goal remains a sub-€25,000 (£22,000) price, which might limit some of the more premium-feeling materials in the cabin.

The concept’s screens can be selected in different themes, including retro graphics from the original Golf, and this feature is expected to make production.

Source: Volkswagen brings back physical buttons for all new cars | Autocar

Very glad that people are starting to realise that touchscreens are not only unsafe but also clumsy, slow and annoying

Internet Archive: Digital Lending is Fair Use, Not Copyright Infringement – a library is a library, whether it’s paper or digital

In 2020, publishers Hachette, HarperCollins, John Wiley and Penguin Random House sued the Internet Archive (IA) for copyright infringement, equating its ‘Open Library’ to a pirate site.

IA’s library is a non-profit operation that scans physical books in-house, which patrons can then borrow in ebook format, with technical restrictions that prevent copying.

Staying true to the centuries-old library concept, only one patron at a time can rent a digital copy of a physical book for a limited period.
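
The “controlled” part of controlled digital lending boils down to a single invariant: at any moment, the number of active digital loans of a title may not exceed the number of physical copies the library owns. Here is a toy Python sketch of that rule; the class and method names are illustrative, not the Internet Archive's actual code:

```python
# Toy model of controlled digital lending's core invariant:
# active digital loans of a title <= physical copies owned.
from datetime import datetime, timedelta

class ControlledDigitalLending:
    def __init__(self):
        self.owned = {}   # title -> number of physical copies owned
        self.loans = {}   # title -> list of (patron, due_date)

    def add_copy(self, title: str) -> None:
        self.owned[title] = self.owned.get(title, 0) + 1

    def borrow(self, title: str, patron: str, days: int = 14) -> bool:
        now = datetime.now()
        active = [l for l in self.loans.get(title, []) if l[1] > now]
        if len(active) >= self.owned.get(title, 0):
            return False  # every owned copy is already lent out
        active.append((patron, now + timedelta(days=days)))
        self.loans[title] = active
        return True

library = ControlledDigitalLending()
library.add_copy("Moby-Dick")
assert library.borrow("Moby-Dick", "alice")    # one copy, one loan: allowed
assert not library.borrow("Moby-Dick", "bob")  # a second simultaneous loan is refused
```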

Mass Copyright Infringement or Fair Use?

Not all rightsholders are happy with IA’s scanning and lending activities. The publishers are not against libraries per se, nor do they object to ebook lending, but ‘authorized’ libraries typically obtain an official license or negotiate specific terms. The Internet Archive has no license.

The publishers see IA’s library as a rogue operation that engages in willful mass copyright infringement, directly damaging their bottom line. As such, they want it taken down permanently.

The Internet Archive wholeheartedly disagreed with the copyright infringement allegations; it offers a vital service to the public, the Archive said, and it built its legal defense on fair use.

After weighing the arguments from both sides, New York District Court Judge John Koeltl sided with the publishers. In March, the court granted their motion for summary judgment, which effectively means that the library is indeed liable for copyright infringement.

The judgment and associated permanent injunction effectively barred the library from reproducing or distributing digital copies of the ‘covered books’ without permission from rightsholders. These restrictions were subject to an eventual appeal, which was announced shortly thereafter.

Internet Archive Files Appeal Brief

Late last week, IA filed its opening brief at the Second Circuit Court of Appeals, asking it to reverse the lower court’s judgment. The library argues that the court erred by rejecting its fair use defense.

Whether IA has a fair use defense depends on how the four relevant factors are weighed. According to the lower court, these favor the publishers, but the library vehemently disagrees; it believes that its service promotes the creation and sharing of knowledge, which is a core purpose of copyright.

“This Court should reverse and hold that IA’s controlled digital lending is fair use. This practice, like traditional library lending, furthers copyright’s goal of promoting public availability of knowledge without harming authors or publishers,” the brief reads.

A fair use analysis has to weigh the interests of both sides. The lower court did so, but IA argues that it reached the wrong conclusions, failing to properly account for the “tremendous public benefits” controlled digital lending offers.

No Competition

One of the key fair use factors at stake is whether IA’s lending program affects (i.e., threatens) the traditional ebook lending market. IA uses expert witnesses to argue that there’s no financial harm and further argues that its service is substantially different from the ebook licensing market.

IA offers access to digital copies of books, which is similar to licensed libraries. However, the non-profit organization argues that its lending program is not a substitute as it offers a fundamentally different service.

“For example, libraries cannot use ebook licenses to build permanent collections. But they can use licensing to easily change the selection of ebooks they offer to adapt to changing interests,” IA writes.

The licensing models make these libraries more flexible. However, they have to rely on the books offered by commercial aggregators and can’t add these digital copies to their archives.

“Controlled digital lending, by contrast, allows libraries to lend only books from their own permanent collections. They can preserve and lend older editions, maintaining an accurate historical record of books as they were printed.

“They can also provide access that does not depend on what Publishers choose to make available. But libraries must own a copy of each book they lend, so they cannot easily swap one book for another when interest or trends change,” IA adds.

Stakes are High

The arguments highlighted here are just a fraction of the 74-page opening brief, which goes into much more detail and ultimately concludes that the district court’s judgment should be reversed.

In a recent blog post, IA founder Brewster Kahle writes that if the lower court’s verdict stands, books can’t be preserved for future generations in digital form, in the same way that paper versions have been archived for centuries.

“This lawsuit is about more than the Internet Archive; it is about the role of all libraries in our digital age. This lawsuit is an attack on a well-established practice used by hundreds of libraries to provide public access to their collections.

“The disastrous lower court decision in this case holds implications far beyond our organization, shaping the future of all libraries in the United States and unfortunately, around the world,” Kahle concludes.

A copy of the Internet Archive’s opening brief, filed at the Second Circuit Court of Appeals, is available here (pdf)

Source: Internet Archive: Digital Lending is Fair Use, Not Copyright Infringement – TorrentFreak

Google to pay $700 million and make tiny app store changes to settle with 50 states

On December 11th, a jury decided that Google has an illegal monopoly with its Google Play app store, handing Epic Games a win. But Epic wasn’t the only one fighting an antitrust case. All 50 state attorneys general settled a similar lawsuit in September, and we’ve just now learned what Google agreed to give up as a result: $700 million and a handful of minor concessions in the way that Google runs its store in the United States.

The biggest change: Google will need to let developers steer consumers away from the Google Play Store for several years, if this settlement is approved.

You can read the full 68-page settlement for yourself at the bottom of this story, but here’s the TL;DR about what it includes:

  • $700,000,000 from Google in total (roughly 21 days of Google’s operating profit from the app store alone)
  • $629,000,000 of which will go (after taxes, lawyers’ fees, and so on) to consumers who may have overpaid for apps or in-app purchases via Google Play
  • $70,000,000 of which will go to states to be used as the state AGs see fit
  • $1,000,000 of which is for settlement administration
  • For 7 years, Google will “continue to technically enable Android to allow the installation of third-party apps on Mobile Devices through means other than Google Play”
  • For 5 years, Google will let developers offer an alternative in-app billing system next to Google Play (aka “User Choice Billing”)
  • For 5 years, Google won’t make developers offer their best prices to customers who pick Google Play and Google Play Billing
  • For 4 years, Google won’t make developers ship titles on Google Play at the same time as other stores and with feature parity
  • For 5 years, Google won’t make companies exclusively put Google Play on a phone or its homescreen
  • For 4 years, Google won’t stop OEMs from granting installer rights to preloaded apps
  • For 5 years, Google won’t require its “consent” before an OEM preloads a third-party app store
  • For 4 years, Google will let third-party app stores update apps without requiring user approval
  • For 4 years, Google will let sideloaded app stores use its APIs and “feature splits” to help install apps
  • For 5 years, Google will turn its two sideloading “scare screens” into a single user prompt which will read the equivalent of this agreed-upon language: “Your phone currently isn’t configured to install apps from this source. Granting this source permission to install apps could place your phone and data at risk.”
  • For 5 years, Google will let User Choice Billing participating developers let their users know about better pricing elsewhere and “complete transactions using the developer’s existing web-based billing solution in an embedded webview within its app.”
  • For 6 years, Google will “continue to allow developers to use contact information obtained outside the app or in-app (with User consent) to communicate with Users out-of-app”
  • For 6 years, Google will let consumption-only apps (e.g. Netflix, which doesn’t let you pay on device) tell users about better prices elsewhere, without linking to an outside website — example: “Available on our website for $9.99”
  • For 6 years, Google “shall not prohibit developers from disclosing to Users any service or other fees associated with the Google Play or Google Play’s billing system.”

Does that sound like a lot? If you add it all up, it does make for a slightly different Google app store landscape than we’ve experienced over the past decade and change. But not only does every one of these concessions have an expiration date, many of them are arguably not real concessions.

Google argued during the Epic v. Google trial that users were already perfectly able to install third-party apps on their devices through any number of means, and it claimed many of its agreements with developers, OEMs, and carriers did not require them to, for instance, exclusively put Google Play on a phone or its homescreen.

More importantly, several of the most significant-sounding changes here are tied to Google’s User Choice Billing program — which, as the Epic v. Google trial proved, is mostly a fake choice.

We confirmed with Google spokesperson Dan Jackson this evening that User Choice Billing participants are given a discounted rate of just 4 percent off of Google’s fee when users choose their own payment system, and that it won’t change as a result of the settlement. Not only did Google internally find that developers would lose money when users choose the 4 percent rate, but Google also gives companies like Spotify a free ride while apparently charging everyone else.
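
Some back-of-the-envelope math shows why developers can come out behind. Google's standard 30 percent fee tier is public; the external card-processing rate below (2.9 percent plus $0.30, a typical figure) is an assumption for illustration, not a number from the settlement:

```python
# Why a 4-point User Choice Billing discount can be a net loss on a $10 purchase.
price = 10.00

google_billing_net = price * (1 - 0.30)    # $7.00: Google's fee covers payment processing
ucb_fee = price * (0.30 - 0.04)            # $2.60: Google still takes 26% under UCB
processor_fee = price * 0.029 + 0.30       # $0.59: the developer now pays its own processor
user_choice_net = price - ucb_fee - processor_fee

print(f"Google Play Billing nets the developer ${google_billing_net:.2f}")
print(f"User Choice Billing nets the developer ${user_choice_net:.2f}")  # $6.81, i.e. less
```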

Perhaps most importantly, Google is reserving the right not to let developers like Netflix link to their own websites to give their users a discounted rate. “Google is not required to allow developers to include links that take a User outside an app distributed through Google Play to make a purchase,” the settlement agreement reads. We are still waiting to find out whether Apple will allow links and/or buttons to alternative payment systems, based on the ruling in Epic v. Apple. But the Google / state AGs settlement suggests that regardless, Google will not be required to allow links.

[…]

Source: Google to pay $700 million and make tiny app store changes to settle with 50 states – The Verge

It’s still baffling that Google lost this case and Apple won it on almost exactly the same grounds, given that in Google’s case you can actually sideload apps “legally” (if in an obtuse manner that makes you feel you are doing something wrong) and in Apple’s case you can’t.

Lamborghini Tests Active Camber and Toe Control for Better Handling

It’s not often that we get to experience a new and completely novel piece of automotive technology for the first time. But that’s what Lamborghini seems to have created with its Active Wheel Carrier, which we have now sampled in prototype form. The system itself is both clever and complex, but the basic purpose is simple: to give real-time control of camber and toe alignment settings while a car is moving.

According to Rouven Mohr, Lamborghini’s chief technical officer, this is one of the final frontiers of vehicle dynamics. Suspension geometry is usually based around a set of compromises, with the loads created by a car in motion inevitably negatively affecting at least some of these. And the alignment settings that are right for the track will cause premature tire wear on the street, which is why many high-performance cars have track-alignment settings and necessitate switching back and forth. Gaining active control in two different planes—toe being the angle of the rotating wheel relative to the direction of travel, and camber its side-on angle relative to the ground—means that many of these compromises can be eliminated. The results, based on our drive in a Lamborghini Huracán development mule at Porsche’s Nardò test track in Italy, are deeply impressive.

The idea itself is not new, and Mohr admits that work on it was being done at VW Group sibling Audi when he previously worked there. But as well as the hardware required to move the wheel in two planes, the challenge is creating a control system capable of doing so quickly and accurately enough for the benefits to be exploited. This is an area in which Lamborghini is leading the way.

The system works exclusively on each of the Huracán prototype’s rear wheels. Active toe control is, in essence, a rear-steering system. We’ve had those before, of course—but this one can also move the wheels between toe-in, where the leading edges point very slightly toward each other, and toe-out, where they do the opposite. In very general terms, toe-out makes a car more reactive and keener to turn, while toe-in gives better high-speed stability.

Active camber control is more revolutionary. Under cornering loads, a car leans over and the suspension compresses, which alters the relationship between the tire tread and the road surface. On something as low and firmly suspended as a Lamborghini supercar, the effect is much slighter than it would be on a 1970s sedan, but it is still significant, as it creates uneven pressure distribution on the tire’s contact patch, which reduces grip. Many performance cars are set up with negative camber (the tire leaned in on its inside edge) to compensate for this, but doing so reduces straight-line traction and increases tire wear.

[…]

Two rotating flanges within are what alter the relative angle between the two sides, one controlling camber and the other toe. These are gear-driven by 48-volt electric motors.

[…]

The Active Wheel Carrier can deliver up to 6.6 degrees of toe adjustment in either direction and between 2.5 degrees of positive and 5.5 degrees of negative camber. Both planes can be adjusted at the same time, and the electric motors can do this at up to 60 degrees a second. So even the most extreme change possible—from full toe-in to full toe-out—could be accomplished in under a quarter of a second, although most changes will be much smaller adjustments.
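
That quarter-second claim follows directly from the quoted figures, as a quick sanity check shows:

```python
# Full sweep from maximum toe-in to maximum toe-out at the quoted actuator speed.
toe_range_deg = 2 * 6.6                 # 6.6 degrees of adjustment in either direction
rate_deg_per_s = 60.0                   # quoted maximum actuation speed
print(toe_range_deg / rate_deg_per_s)   # 0.22 s, just under a quarter of a second
```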

[…]

Starting with the system switched off, and the Evo’s rear suspension in its default position, reveals both understeer on cold tires when driven aggressively and a rapid transition to oversteer when rear grip is exceeded. With the Active Wheel Carrier switched on, the Huracán immediately feels grippier and more reactive, keener to change direction—much of which is due to the rear-steering effect of toe adjustment—but also much more stable when being pushed to the edge of adhesion.

[…]

On the handling track, our fastest lap with AWC on was 4.8 seconds faster than with the system off, and while that effect is reduced for more experienced drivers on more familiar tracks, it’s still significant. Even a Lambo pro driver is reportedly 2.8 seconds quicker at Nardò with AWC. That’s on par with the gain from switching from sport tires to street-legal semi-slicks.

The technology would also enable other changes: wider front tires relative to the rears, slightly softer springs to allow more roll (active camber being able to adjust to this), and the intriguing possibility of running different tire compounds front and rear to extract maximum benefit from the improved grip. Motors powering the units would also likely be upgraded to work on 400 volts, supplied directly from the plug-in-hybrid battery pack.

While AWC is officially only an experiment at this stage, it seems overwhelmingly likely to play a part in Lamborghini’s future—most likely the Huracán replacement that will debut next year.

Source: Lamborghini Tests Active Camber and Toe Control for Better Handling

Magic: The Gathering Bans the Use of Generative AI in ‘Final’ Products – Wizards of the Coast cancelled themselves

[…] a D&D artist confirmed they had used generative AI programs to finish several pieces of art included in the sourcebook Glory of the Giants—a controversy that saw Wizards of the Coast publicly ban the use of AI tools in the process of creating art for the venerable TTRPG. Now, the publisher is making that stance clearer for its other wildly successful game, Magic: The Gathering.

Update 12/19 11.20PM ET: This post has been updated to include clarification from Wizards of the Coast regarding the extent of guidelines for creatives working with Magic and D&D and the use of Generative A.I.

“For 30 years, Magic: The Gathering has been built on the innovation, ingenuity, and hard work of talented people who sculpt a beautiful, creative game. That isn’t changing,” a new statement shared by Wizards of the Coast on Daily MTG begins. “Our internal guidelines remain the same with regard to artificial intelligence tools: We require artists, writers, and creatives contributing to the Magic TCG to refrain from using AI generative tools to create final Magic products. We work with some of the most talented artists and creatives in the world, and we believe those people are what makes Magic great.”

[…]

The Magic statement also comes in the wake of major layoffs at Wizards’ parent company Hasbro. Last week the Wall Street Journal reported that Hasbro plans to lay off 1,100 staff over the next six months across its divisions in a series of cost-cutting measures, with many creatives across Wizards’ D&D and Magic teams confirming they were part of the layoffs. Just this week, the company faced backlash for opening a position for a Digital Artist at Wizards of the Coast in the wake of the job cuts, which totaled roughly a fifth of Hasbro’s current workforce across all of its divisions.

The job description specifically highlights that the role includes having to “refine and modify illustrative artwork for print and digital media through retouching, color correction, adjusting ink density, re-sizing, cropping, generating clipping paths, and hand-brushing spot plate masks,” as well as “use… digital retouching wizardry to extend cropped characters and adjust visual elements due to legal and art direction requirements,” which critics suggested carried the implication that the role would involve iterating on and polishing art created through generative AI. Whether or not this will be the case considering Wizards’ now-publicized stance remains to be seen.

Source: Magic: The Gathering Formally Bans the Use of Generative AI in ‘Final’ Products

The Gawker company is very anti-AI and keeps mentioning backlash. It’s quite funny that if you look at the supposed “backlash”, the complaints are mostly about the lack of quality control around said art, insofar as people thought the points raised were valid at all (source: twitter page with original disclosure). It’s a kind of cancel-culture cave-in, where a minority gets to play the role of judge, jury and executioner and the person being cancelled actually… listens to the canceller, with no actual evidence of their crime being presented or weighed independently.

Internet Archive Files Opening Brief In Its Appeal Against Book Publishers’ Wanton Destruction Of It

A few weeks ago, publishing giant Penguin Random House (and, yes, I’m still confused why they didn’t call it Random Penguin House after the merger) announced that it was filing a lawsuit (along with many others) against the state of Iowa for its attempt to ban books in school libraries. In its announcement, Penguin Random House talked up the horrors of trying to limit access to books in schools and libraries:

The First Amendment guarantees the right to read and to be read, and for ideas and viewpoints to be exchanged without unreasonable government interference. By limiting students’ access to books, Iowa violates this core principle of the Constitution.

“Our mission of connecting authors and their stories to readers around the world contributes to the free flow of ideas and perspectives that is a hallmark of American Democracy—and we will always stand by it,” says Nihar Malaviya, CEO, Penguin Random House. “We know that not every book we publish will be for every reader, but we must protect the right for all Americans, including students, parents, caregivers, teachers, and librarians to have equitable access to books, and to continue to decide what they read.” 

That’s a very nice sentiment, and I’m glad that Penguin Random House is stating it, but it rings a little hollow, given that Penguin Random House is among the big publishers suing to shut down the Internet Archive, a huge and incredibly useful digital library that actually has the mission that Penguin Random House’s Nihar Malaviya claims is theirs: connecting authors and their stories to readers around the world, contributing to the free flow of ideas and perspectives that are important to the world, and believing in the importance of equitable access to books.

So, then, why is Penguin Random House trying to kill the Internet Archive?

While we knew this was coming, last week, the Internet Archive filed its opening brief before the 2nd Circuit appeals court to try to overturn the tragically terrible district court ruling by Judge John Koeltl. The filing is worth reading:

Publishers claim this public service is actually copyright infringement. They ask this Court to elevate form over substance by drawing an artificial line between physical lending and controlled digital lending. But the two are substantively the same, and both serve copyright’s purposes. Traditionally, libraries own print books and can lend each copy to one person at a time, enabling many people to read the same book in succession. Through interlibrary loans, libraries also share books with other libraries’ patrons. Everyone agrees these practices are not copyright infringement.

Controlled digital lending applies the same principles, while creating new means to support education, research, and cultural participation. Under this approach, a library that owns a print book can scan it and lend the digital copy instead of the physical one. Crucially, a library can loan at any one time only the number of print copies it owns, using technological safeguards to prevent copying, restrict access, and limit the length of loan periods.

Lending within these limits aligns digital lending with traditional library lending and fundamentally distinguishes it from simply scanning books and uploading them for anyone to read or redistribute at will. Controlled digital lending serves libraries’ mission of supporting research and education by preserving and enabling access to a digital record of books precisely as they exist in print. And it serves the public by enabling better and more efficient access to library books, e.g., for rural residents with distant libraries, for elderly people and others with mobility or transportation limitations, and for people with disabilities that make holding or reading print books difficult. At the same time, because controlled digital lending is limited by the same principles inherent in traditional lending, its impact on authors and publishers is no different from what they have experienced for as long as libraries have existed.

The filing makes the case that the Internet Archive’s use of controlled digital lending for ebooks is protected by fair use, leaning heavily on the idea that there is no evidence of harm to the copyright holders:

First, the purpose and character of the use favor fair use because IA’s controlled digital lending is noncommercial, transformative, and justified by copyright’s purposes. IA is a nonprofit charity that offers digital library services for free. Controlled digital lending is transformative because it expands the utility of books by allowing libraries to lend copies they own more efficiently and borrowers to use books in new ways. There is no dispute that libraries can lend the print copy of a book by mail to one person at a time. Controlled digital lending enables libraries to do the same thing via the Internet—still one person at a time. And even if this use were not transformative, it would still be favored under the first factor because it furthers copyright’s ultimate purpose of promoting public access to knowledge—a purpose libraries have served for centuries.

Second, the nature of the copyrighted works is neutral because the works are a mix of fiction and non-fiction and all are published.

Third, the amount of work copied is also neutral because copying the entire book is necessary: borrowing a book from a library requires access to all of it.

Fourth, IA’s lending does not harm Publishers’ markets. Controlled digital lending is not a substitute for Publishers’ ebook licenses because it offers a fundamentally different service. It enables libraries to efficiently lend books they own, while ebook licenses allow libraries to provide readers temporary access through commercial aggregators to whatever selection of books Publishers choose to make available, whether the library owns a copy or not. Two experts analyzed the available data and concluded that IA’s lending does not harm Publishers’ sales or ebook licensing. Publishers’ expert offered no contrary empirical evidence.

Weighing the fair use factors in light of copyright’s purposes, the use here is fair. In concluding otherwise, the district court misunderstood controlled digital lending, conflating it with posting an ebook online for anyone to access at any time. The court failed to grasp the key feature of controlled digital lending: the digital copy is available only to the one person entitled to borrow it at a time, just like lending a print book. This error tainted the district court’s analysis of all the factors, particularly the first and fourth. The court compounded that error by failing to weigh the factors in light of the purposes of copyright.

Not surprisingly, I agree with the Internet Archive’s arguments here, but these kinds of cases are always a challenge. Judges sometimes have this weird view of copyright law where they ignore the actual law, the purpose of the law, and the constitutional underpinnings of the law, and insist that the purpose of copyright law is to award the copyright holders as much money and control as possible.

That’s not how copyright is supposed to work, but judges sometimes seem to forget that. Hopefully, the 2nd Circuit does not. The 2nd Circuit, historically, has been pretty good on fair use issues, so hopefully that holds in this case as well.

The full brief is (not surprisingly) quite well done and detailed and worth reading.

And now we’ll get to see whether or not Penguin Random House really supports “the free flow of ideas” or not…

Source: Internet Archive Files Opening Brief In Its Appeal Of Book Publishers’ Win | Techdirt

People discussing Assisted Dying (Euthanasia) in the UK – apparently it’s still illegal there

Dame Esther Rantzen says a free vote on assisted dying would be top of the agenda if she were PM for a day.

“I think it’s important that the law catches up with what the country wants,” the veteran broadcaster told Radio 4’s Today podcast.

Earlier this year, the 83-year-old announced she had been diagnosed with stage four lung cancer.

Dame Esther told the BBC she is currently undergoing a “miracle” treatment to combat the disease.

However, if her next scan shows the medication is not working “I might buzz off to Zurich”, where assisted dying is legal and she has joined the Dignitas clinic, she said.

She said this decision could be driven in part by her wish that her family’s “last memories of me” are not “painful because if you watch someone you love having a bad death, that memory obliterates all the happy times”.

Source: Dame Esther Rantzen: ‘If I were PM, we would vote on assisted dying’ – BBC News

What civilised country doesn’t allow euthanasia? It’s like a 1970s country where being gay is still illegal. Climb out of your Brexit-inflicted stone age, Britain!

Research team discovers how to sabotage antibiotic-resistant ‘superbugs’

The typical strategy when treating microbial infections is to blast the pathogen with an antibiotic, which works by getting inside the harmful cell and killing it. This is not as easy as it sounds, because any new antibiotic needs to be both water-soluble, so that it can travel easily through the bloodstream, and oily, in order to cross the pathogenic cell’s first line of defense, the cellular membrane. Water and oil, of course, don’t mix, and it’s difficult to design a drug that has enough of both characteristics to be effective.

The difficulty doesn’t stop there, either, because pathogenic cells have developed something called an “efflux pump” that can recognize antibiotics and safely excrete them from the cell, where they can’t do any harm. If the antibiotic can’t overcome the efflux pump and kill the cell, then the pathogen “remembers” what that specific antibiotic looks like and develops additional efflux pumps to efficiently handle it—in effect, becoming resistant to that particular antibiotic.

One path forward is to find a new antibiotic, or combinations of them, and try to stay one step ahead of the superbugs.

“Or, we can shift our strategy,” says Alejandro Heuck, associate professor of biochemistry and molecular biology at UMass Amherst and the paper’s senior author.

[…]

Like the pathogenic cell, host cells also have thick, difficult-to-penetrate cell walls. In order to breach them, pathogens have developed a syringe-like machine that first secretes two proteins, known as PopD and PopB. Neither PopD nor PopB individually can breach the cell wall, but the two proteins together can create a “translocon”—the cellular equivalent of a tunnel through the cell membrane. Once the tunnel is established, the pathogenic cell can inject other proteins that do the work of infecting the host.

This entire process is called the Type 3 secretion system—and none of it works without both PopB and PopD. “If we don’t try to kill the pathogen,” says Heuck, “then there’s no chance for it to develop resistance. We’re just sabotaging its machine. The pathogen is still alive; it’s just ineffective, and the host has time to use its natural defenses to get rid of the pathogen.”

[…]

Heuck and his colleagues realized that an enzyme class called the luciferases—similar to the ones that cause lightning bugs to glow at night—could be used as a tracer. They split the enzyme into two halves. One half went into the PopD/PopB proteins, and the other half was engineered into a host cell.

These engineered proteins and hosts can be flooded with different chemical compounds. If the host cell suddenly lights up, that means that PopD/PopB successfully breached the cellular wall, reuniting the two halves of the luciferase, causing them to glow. But if the cells stay dark? “Then we know which molecules break the translocon,” says Heuck.

Heuck is quick to point out that his team’s research not only has obvious applications in the world of pharmaceuticals and public health, but that it also advances our understanding of exactly how microbes infect healthy cells. “We wanted to study how the translocon worked,” he says, “and then suddenly we discovered that our findings can help solve a public-health problem.”

This research is published in the journal ACS Infectious Diseases.

More information: Hanling Guo et al, Cell-Based Assay to Determine Type 3 Secretion System Translocon Assembly in Pseudomonas aeruginosa Using Split Luciferase, ACS Infectious Diseases (2023). DOI: 10.1021/acsinfecdis.3c00482

Source: Research team discovers how to sabotage antibiotic-resistant ‘superbugs’

AI trained on millions of life stories can predict risk of early death

An artificial intelligence trained on personal data covering the entire population of Denmark can predict people’s chances of dying more accurately than any existing model, even those used in the insurance industry. The researchers behind the technology say it could also have a positive impact in early prediction of social and health problems – but must be kept out of the hands of big business.

Sune Lehmann Jørgensen at the Technical University of Denmark and his colleagues used a rich dataset from Denmark that covers education, visits to doctors and hospitals, any resulting diagnoses, income and occupation for 6 million people from 2008 to 2020.

They converted this dataset into words that could be used to train a large language model, the same technology that powers AI apps such as ChatGPT. These models work by looking at a series of words and determining which word is statistically most likely to come next, based on vast amounts of examples. In a similar way, the researchers’ Life2vec model can look at a series of life events that form a person’s history and determine what is most likely to happen next.
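
To make that concrete, here is a toy illustration of the idea, not the actual Life2vec model: life events become tokens, and “what happens next” is read off observed sequences. A simple bigram count stands in for the transformer's next-token prediction, and the event names are invented:

```python
# Toy next-life-event predictor: bigram counts over tokenized life histories.
from collections import Counter, defaultdict

lives = [
    ["job_teacher", "income_decile_5", "diagnosis_asthma", "hospital_visit"],
    ["job_nurse", "income_decile_5", "diagnosis_asthma", "recovered"],
    ["job_teacher", "income_decile_5", "moved_city"],
]

bigrams = defaultdict(Counter)
for life in lives:
    for prev, nxt in zip(life, life[1:]):
        bigrams[prev][nxt] += 1

def predict_next(event: str) -> str:
    """Return the most frequently observed follow-up event."""
    return bigrams[event].most_common(1)[0][0]

print(predict_next("income_decile_5"))  # "diagnosis_asthma" (seen in 2 of 3 lives)
```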

In experiments, Life2vec was trained on all but the last four years of the data, which were held back for testing. The researchers took data on a group of people aged 35 to 65, half of whom died between 2016 and 2020, and asked Life2vec to predict who lived and who died. It was 11 per cent more accurate than any existing AI model or the actuarial life tables used to price life insurance policies in the finance industry.

The model was also able to predict the results of a personality test in a subset of the population more accurately than AI models trained specifically to do the job.

Jørgensen believes that the model has consumed enough data that it is likely to be able to shed light on a wide range of health and social topics. This means it could be used to predict health issues and catch them early, or by governments to reduce inequality. But he stresses that it could also be used by companies in a harmful way.

“Clearly, our model should not be used by an insurance company, because the whole idea of insurance is that, by sharing the lack of knowledge of who is going to be the unlucky person struck by some incident, or death, or losing your backpack, we can kind of share this burden,” says Jørgensen.

But technologies like this are already out there, he says. “They’re likely being used on us already by big tech companies that have tonnes of data about us, and they’re using it to make predictions about us.”

Source: AI trained on millions of life stories can predict risk of early death | New Scientist

Internet Architecture Board hits out at US, EU, UK client-side scanning (spying on everything on your phone and PC all the time) plans – to save (heard it before?) kids

[…]

Apple brought widespread attention to this so-called client-side scanning in August 2021 when it announced plans to examine photos on iPhones and iPads before they were synced to iCloud, as a safeguard against the distribution of child sexual abuse material (CSAM). Under that plan, if someone’s files were deemed to be CSAM, the user could lose their iCloud account and be reported to the cops.

As the name suggests, client-side scanning involves software on a phone or some other device automatically analyzing files for unlawful photos and other content, and then performing some action – such as flagging or removing the documents or reporting them to the authorities. At issue, primarily, is the loss of privacy from the identification process – how will that work with strong encryption, and do the files need to be shared with an outside service? Then there’s the reporting process – how accurate is it, is there any human intervention, and what happens if your gadget wrongly fingers you to the cops?
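
For a sense of the mechanics, here is a bare-bones sketch of the scanning step, with two big simplifications: real deployments use perceptual hashes (PhotoDNA, NeuralHash) that survive re-encoding rather than exact SHA-256 matches, and the hash list and reporting function here are placeholders:

```python
# Minimal client-side scanning loop: fingerprint a file before it leaves the
# device and check it against a list of known-bad hashes.
import hashlib
from pathlib import Path

# Placeholder; real systems ship opaque databases of perceptual hashes.
KNOWN_BAD_HASHES = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}

def report_to_provider(path: Path, digest: str) -> None:
    # The contested step: the user's own device informs on the user.
    print(f"match for {path.name}: {digest[:12]}... would be reported")

def scan_before_upload(path: Path) -> bool:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        report_to_provider(path, digest)
        return False  # block the upload or sync
    return True       # file may leave the device
```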

The iGiant’s plan was pilloried by advocacy organizations and by customers on technical and privacy grounds. Ultimately Apple abandoned the effort and went ahead with offering iCloud encryption – a level of privacy that prompted political pushback at other tech titans.

Proposals for client-side scanning … mandate unrestricted access to private content and therefore undermine end-to-end encryption and bear the risk to become a widespread facilitator of surveillance and censorship

Client-side scanning has since reappeared, this time on legislative agendas. And the IAB – a research committee for the Internet Engineering Task Force (IETF), a crucial group of techies who help keep the ‘net glued together – thinks that’s a bad idea.

“A secure, resilient, and interoperable internet benefits the public interest and supports human rights to privacy and freedom of opinion and expression,” the IAB declared in a statement just before the weekend.

[…]

Specifically, the IAB cites Europe’s planned “Regulation laying down rules to prevent and combat child sexual abuse” (2022/0155(COD)), the UK Online Safety Act of 2023, and the US Earn-It Act, all of which contemplate regulatory regimes that have the potential to require the decryption of encrypted content in support of mandated surveillance.

The administrative body acknowledges the social harm done through the distribution of illegal content on the internet and the need to protect internet users. But it contends indiscriminate surveillance is not the answer.

The UK has already passed its Online Safety Act legislation, which authorizes telecom watchdog Ofcom to demand decryption of communications on grounds of child safety – though government officials have admitted that’s not technically feasible at the moment.

Europe, under fire for concealing who it consulted on client-side scanning, and the US appear to be heading down a similar path.

For the IAB and IETF, client-side scanning initiatives echo other problematic technology proposals – including wiretaps, cryptographic backdoors, and pervasive monitoring.

“The IAB opposes technologies that foster surveillance as they weaken the user’s expectations of private communication which decreases the trust in the internet as the core communication platform of today’s society,” the organization wrote. “Mandatory client-side scanning creates a tool that is straightforward to abuse as a widespread facilitator of surveillance and censorship.”

[…]

Source: Internet Architecture Board hits out at client-side scanning • The Register

As soon as they take away privacy to save kids, you know they will expand the remit, as governments have always done. The fact is that mass surveillance is not particularly effective, even with AI, except in making people feel watched and thus altering their behaviour. This feeling of always being spied upon is much, much worse for whole generations of children than the tiny number of sexual predators who might actually be caught.

How To Build Your Own Custom ChatGPT Bot

There’s something new and powerful for ChatGPT users to play around with: Custom GPTs. These bespoke bots are essentially more focused, more specific versions of the main ChatGPT model, enabling you to build something for a particular purpose without using any coding or advanced knowledge of artificial intelligence.

The name GPT stands for Generative Pre-trained Transformer, as it does in ChatGPT. Generative is the ability to produce new content outside of what an AI was trained on. Pre-trained indicates that it’s already been trained on a significant amount of material, and Transformer is a type of AI architecture adept at understanding language.

You might already be familiar with using prompts to style the responses of ChatGPT: You can tell it to answer using simple language, for example, or to talk to you as if it were an alien from another world. GPTs build on this idea, enabling you to create a bot with a specific personality.

You can build a GPT using a question-and-answer routine. Screenshot: ChatGPT

What’s more, you can upload your own material to add to your GPT’s knowledge banks—it might be samples of your own writing, for instance, or copies of reports produced by your company. GPTs will always have access to the data you upload to them and be able to browse the web at large.

GPTs are exclusive to Plus and Enterprise users, though everyone should get access soon. OpenAI plans to open a GPT store where you can sell your AI bot creations if you think others will find them useful, too. Think of an app store of sorts but for bespoke AI bots.

“GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home—and then share that creation with others,” explains OpenAI in a blog post. “For example, GPTs can help you learn the rules to any board game, help teach your kids math, or design stickers.”

Getting started with GPT building

Assuming you have a Plus or Enterprise account, click Explore on the left of the web interface to see some example GPTs: There’s one to help you with your creative writing, for example, and one to produce a particular style of digital painting. When you’re ready to start building your own, click Create a GPT at the top.

There are two tabs to swap between: Create for building a GPT through a question-and-answer routine and Configure for more deliberate GPT production. If you’re just getting started, it’s best to stick with Create, as it’s a more user-friendly option and takes you step-by-step through the process.

Respond to the prompts of the GPT Builder bot to explain what you want the new GPT to be able to do: Explain certain concepts, give advice in specific areas, generate particular kinds of text or images, or whatever it is. You’ll be asked to give the GPT a name and choose an image for it, though you’ll get suggestions for these, too.

You’re able to test out your GPT as you build it. Screenshot: ChatGPT

As you answer the prompts from the builder, the GPT will begin to take form in the preview pane on the right—together with some example inputs that you might want to give to it. You might be asked about specific areas of expertise that you want the bot to have and the sorts of answers you want the bot to give in terms of their length and complexity. The building process will vary though, depending on the GPT you’re creating.

After you’ve worked through the basics of making a GPT, you can try it out and switch to the Configure tab to add more detail and depth. You’ll see that your responses so far have been used to craft a set of instructions for the GPT about its identity and how it should answer your questions. Some conversation starters will also be provided.

You can edit these instructions if you need to and click Upload files to add to the GPT’s knowledge banks (handy if you want it to answer questions about particular documents or topics, for instance). Most common document formats, including PDFs and Word files, seem to be supported, though there’s no official list of supported file types.

GPTs can be kept to yourself or shared with others. Screenshot: ChatGPT

The checkboxes at the bottom of the Configure tab let you choose whether or not the GPT has access to web browsing, DALL-E image creation, and code interpretation capabilities, so make your choices accordingly. If you add any of these capabilities, they’ll be called upon as and when needed—there’s no need to specifically ask for them to be used, though you can if you want.

When your GPT is working the way you want it to, click the Save button in the top right corner. You can choose to keep it to yourself or make it available to share with others. After you click on Confirm, you’ll be able to access the new GPT from the left-hand navigation pane in the ChatGPT interface on the web.

GPTs are ideal if you find yourself often asking ChatGPT to complete tasks in the same way or cover the same topics—whether that’s market research or recipe ideas. The GPTs you create are available whenever you need them, alongside access to the main ChatGPT engine, which you can continue to tweak and customize as needed.
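
There is no public API for building GPTs themselves, but OpenAI's separate Assistants API (in beta at the time of writing) exposes roughly the same ingredients programmatically: instructions, uploaded knowledge files, and capability toggles. A minimal sketch as an analogue, not the GPT builder itself, assuming the openai Python package v1.x, an OPENAI_API_KEY environment variable, and an illustrative local PDF:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Roughly the Configure tab in code: knowledge file, identity, capabilities.
knowledge = client.files.create(
    file=open("company_reports.pdf", "rb"),  # illustrative file name
    purpose="assistants",
)
assistant = client.beta.assistants.create(
    name="Report Explainer",
    instructions="Answer questions about the uploaded reports in plain language.",
    tools=[{"type": "retrieval"}, {"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
    file_ids=[knowledge.id],
)
print(assistant.id)  # conversations then run via the API's threads and runs
```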

Source: How To Build Your Own Custom ChatGPT Bot

MS Phi-2 small language model – outperforms many LLMs but fits on your laptop

We are now releasing Phi-2 (opens in new tab), a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters. On complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation.

With its compact size, Phi-2 is an ideal playground for researchers, including for exploration around mechanistic interpretability, safety improvements, or fine-tuning experimentation on a variety of tasks. We have made Phi-2 (opens in new tab) available in the Azure AI Studio model catalog to foster research and development on language models.

[…]

Phi-2 is a Transformer-based model with a next-word prediction objective, trained on 1.4T tokens from multiple passes on a mixture of Synthetic and Web datasets for NLP and coding. The training for Phi-2 took 14 days on 96 A100 GPUs. Phi-2 is a base model that has not undergone alignment through reinforcement learning from human feedback (RLHF), nor has it been instruct fine-tuned. Despite this, we observed better behavior with respect to toxicity and bias compared to existing open-source models that went through alignment (see Figure 3). This is in line with what we saw in Phi-1.5 due to our tailored data curation technique, see our previous tech report (opens in new tab) for more details on this. For more information about the Phi-2 model, please visit Azure AI | Machine Learning Studio (opens in new tab).

Figure 3. Safety scores computed on 13 demographics from ToxiGen. A subset of 6,541 sentences are selected and scored between 0 and 1 based on scaled perplexity and sentence toxicity; a higher score indicates the model is less likely to produce toxic sentences compared to benign ones. Across all 13 categories, Phi-1.5 scores highest, Phi-2 second-highest, and Llama-7B lowest.
[…]

With only 2.7 billion parameters, Phi-2 surpasses the performance of Mistral and Llama-2 models at 7B and 13B parameters on various aggregated benchmarks. Notably, it achieves better performance than the 25x-larger Llama-2-70B model on multi-step reasoning tasks, i.e., coding and math. Furthermore, Phi-2 matches or outperforms the recently announced Google Gemini Nano 2, despite being smaller in size.
[…]

| Model | Size | BBH | Commonsense Reasoning | Language Understanding | Math | Coding |
|---|---|---|---|---|---|---|
| Llama-2 | 7B | 40.0 | 62.2 | 56.7 | 16.5 | 21.0 |
| Llama-2 | 13B | 47.8 | 65.0 | 61.9 | 34.2 | 25.4 |
| Llama-2 | 70B | 66.5 | 69.2 | 67.6 | 64.1 | 38.3 |
| Mistral | 7B | 57.2 | 66.4 | 63.7 | 46.4 | 39.4 |
| Phi-2 | 2.7B | 59.2 | 68.8 | 62.0 | 61.1 | 53.7 |

Table 1. Averaged performance on grouped benchmarks compared to popular open-source SLMs.

| Model | Size | BBH | BoolQ | MBPP | MMLU |
|---|---|---|---|---|---|
| Gemini Nano 2 | 3.2B | 42.4 | 79.3 | 27.2 | 55.8 |
| Phi-2 | 2.7B | 59.3 | 83.3 | 59.1 | 56.7 |

Table 2. Comparison between Phi-2 and Gemini Nano 2 on Gemini’s reported benchmarks.
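
For researchers who want to poke at the model outside Azure AI Studio, the weights can also be loaded through Hugging Face transformers. A minimal sketch, assuming the model id microsoft/phi-2 and the transformers, torch, and accelerate packages; treat the details as assumptions and defer to the official model card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # assumed Hugging Face model id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 2.7B parameters fit in roughly 6 GB at fp16
    device_map="auto",
    trust_remote_code=True,     # Phi-2 initially shipped with custom model code
)

# "Instruct:/Output:" is the QA prompt format suggested for the model.
prompt = "Instruct: Explain why the sky is blue.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```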

Source: Phi-2: The surprising power of small language models – Microsoft Research

Ubisoft Just Delisted The Crew Game Without Warning – will be unplayable

Ubisoft’s The Crew, which was released in 2014 for Xbox 360, Xbox One, PlayStation 4, and PC, has been delisted without warning. Its servers will shut off next year, too. And because the game is an always-online experience, Ubisoft has confirmed that it will become unplayable once that happens.

The Crew is a massive open-world driving game set in a digital (not-to-scale) recreation of the United States of America. While its narrative was a weird, melodramatic tale of car gangs and criminals, the real reason to play The Crew was to go on road trips across Ubisoft Ivory Tower’s squished but charming recreation of the USA. But if you wanted to check out The Crew now or return to its odd world, you’d better hurry up, because in 2024 it all dies.

On December 14, Ubisoft delisted The Crew from all console storefronts and on Steam. This happened early in the day, before Ubisoft had officially commented on the situation, leading to speculation as to what it meant. On Thursday, the publisher posted a letter on its website confirming the game has been delisted and its servers will be shut down on April 1, 2024.

Because the game is always online, that means once the servers are dead so is the game for everyone who bought it, digitally or physically.

“We understand this may be disappointing for players still enjoying the game,” Ubisoft said. “But it has become a necessity due to upcoming server infrastructure and licensing constraints.”

Ubisoft says folks who have recently bought The Crew might be eligible for a refund but stopped short of offering anything else to players who will permanently lose access to a game they purchased just a few months from now.

“Decommissioning a game, and especially our first one, is not something we take lightly,” Ubisoft said. “Our goal remains to provide the best action-driving gameplay experience for players and, to deliver on it, we are continuing to provide new content and support for The Crew 2 and the recently launched The Crew Motorfest.”

While I don’t expect companies to support online games for decades, it would be nice if games were built in a way that, when the servers do eventually die, the game can be updated into an offline experience. Otherwise, in 10 to 15 years, we are going to have a lot of games that won’t be playable anymore, even if you own the disc.

Source: Ubisoft Just Delisted The Crew Without Warning

Twitch Allows ‘Artistic Nudity’, but rolls back the rules because US males are scared of seeing skin

Streaming platform Twitch recently announced a change to its sexual content policies that allowed some forms of fictionalized nudity—such as digital characters, sculptures, or drawings—as long as it was properly labeled. But now, just a few days later, it’s rolling back these changes and has apologized to the community.

Earlier this month, a new Twitch trend kicked off a firestorm of discourse and angry men yelling about women. Some women were streaming themselves using certain camera angles to appear topless. This new “topless meta”—like the hot tub meta from before—saw some women successfully trying out the trend on the streaming site, some getting banned, and a lot more dudes getting very angry about it all. In response, Twitch stepped in on December 13 and updated its sexual content policies, hoping to streamline some confusion and keep correctly labeled adult content off the homepage, but still on the site. It also officially allowed digital and fictionalized nudity. And two days later, Twitch seems to regret that specific choice.

In a post on December 15, Twitch’s CEO, Dan Clancy, admitted that its new policy changes allowing fictional nudity had led to a small uptick in people making content that broke the rules, but had also led to an influx of nudity that did follow the rules. The community response to all this new, totally-allowed artistic nudity was strong and not completely positive, leading to Twitch reverting its changes.

“So, effective today, we are rolling back the artistic nudity changes,” Clancy said. “Moving forward, depictions of real or fictional nudity won’t be allowed on Twitch, regardless of the medium. This restriction does not apply to Mature-rated games. You can find emote-specific standards for nudity and sexual content in the Emote Guidelines.”

Twitch suggested the company went “too far” when altering the nudity policy. It further explained that digital nudity presents a “unique challenge” due to AI-generated images which can look photorealistic but are still digital, fictionalized characters, technically.

While Twitch is rolling back the artistic nudity guidelines, the company did clarify that the other changes involving exotic dancing, body painting or content focused on certain clothed parts of the body weren’t being reverted.

“While I wish we would have predicted this outcome, part of our job is to make adjustments that serve the community,” Clancy said. “I apologize for the confusion that this update has caused.”

Source: Twitch Allows ‘Artistic Nudity’, Immediately Regrets It

What is wrong with you Americans?!


Google Will Stop Telling Law Enforcement Which Users Were Near a Crime and start saving location data on the mobile device instead of on its servers. But not really though. And Why?

So most of the breathless reporting on Google’s “Updates to Location History and new controls coming soon to Maps” reads a bit like the piece below. However, Google itself, in “Manage your Location History”, says that if you have Location History on, it will also save it to its servers. There is no mention of encryption.

Alphabet Inc.’s Google is changing its Maps tool so that the company no longer has access to users’ individual location histories, cutting off its ability to respond to law enforcement warrants that ask for data on everyone who was in the vicinity of a crime.

Google is changing its Location History feature on Google Maps, according to a blog post this week. The feature, which Google says is off by default, helps users remember where they’ve been. The company said Thursday that for users who have it enabled, location data will soon be saved directly on users’ devices, blocking Google from being able to see it, and, by extension, blocking law enforcement from being able to demand that information from Google.

“Your location information is personal,” said Marlo McGriff, director of product for Google Maps, in the blog post. “We’re committed to keeping it safe, private and in your control.”

The change comes three months after a Bloomberg Businessweek investigation that found police across the US were increasingly using warrants to obtain location and search data from Google, even for nonviolent cases, and even for people who had nothing to do with the crime.

“It’s well past time,” said Jennifer Lynch, the general counsel at the Electronic Frontier Foundation, a San Francisco-based nonprofit that defends digital civil liberties. “We’ve been calling on Google to make these changes for years, and I think it’s fantastic for Google users, because it means that they can take advantage of features like location history without having to fear that the police will get access to all of that data.”

Google said it would roll out the changes gradually through the next year on its own Android and Apple Inc.’s iOS mobile operating systems, and that users will receive a notification when the update comes to their account. The company won’t be able to respond to new geofence warrants once the update is complete, including for people who choose to save encrypted backups of their location data to the cloud.

“It’s a good win for privacy rights and sets an example,” said Jake Laperruque, deputy director of the security and surveillance project at the Center for Democracy & Technology. The move validates what litigators defending the privacy of location data have long argued in court: that just because a company might hold data as part of its business operations, that doesn’t mean users have agreed the company has a right to share it with a third party.
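Google hasn’t published the technical details of those encrypted backups, but the reason client-side encryption defeats geofence warrants is simple: the key never leaves the device, so the server only ever holds ciphertext it cannot read. A minimal Python sketch of that general idea using the `cryptography` library; the key handling and payload format here are illustrative assumptions, not Google’s actual implementation:

```python
# Client-side encryption sketch: the key is generated and kept on the
# device, so the cloud (and anyone serving it a warrant) sees only
# opaque ciphertext. Illustrative only -- not Google's actual scheme.
import json
from cryptography.fernet import Fernet

# Generated once on the device and kept in local secure storage;
# it is never uploaded alongside the backup.
device_key = Fernet.generate_key()
cipher = Fernet(device_key)

location_history = [
    {"lat": 52.3702, "lon": 4.8952, "ts": "2023-12-14T09:30:00Z"},
]

# What actually gets uploaded to the cloud: opaque bytes.
backup_blob = cipher.encrypt(json.dumps(location_history).encode())

# Only the device holding the key can restore the history.
restored = json.loads(cipher.decrypt(backup_blob).decode())
assert restored == location_history
```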

Lynch, the EFF lawyer, said that while Google deserves credit for the move, it’s long been the only tech company that the EFF and other civil-liberties groups have seen responding to geofence warrants. “It’s great that Google is doing this, but at the same time, nobody else has been storing and collecting data in the same way as Google,” she said. Apple, which also has an app for Maps, has said it’s technically unable to supply the sort of location data police want.

There’s still another kind of warrant that privacy advocates are concerned about: so-called reverse keyword search warrants, where police can ask a technology company to provide data on the people who have searched for a given term. “Search queries can be extremely sensitive, even if you’re just searching for an address,” Lynch said.

Source: Google Will Stop Telling Law Enforcement Which Users Were Near a Crime

The question is – why now? The market for location data is estimated at around $12 billion (source: There’s a Murky Multibillion-Dollar Market for Your Phone’s Location Data). If you look even a tiny little bit, you see the government asking for this data all the time, and the fines issued for breaching location data privacy seem tiny compared to the money made by selling it.

Google will also be changing the name of Location History to Timeline – and will still be saving your location to its servers (see the heading “When Location History is on” below):

Manage your Location History

In the coming months, the Location History setting name will change to Timeline. If Location History is turned on for your account, you may find Timeline in your app and account settings.

Location History is a Google Account setting that creates Timeline, a personal map that helps you remember:

  • Places you go
  • Routes to destinations
  • Trips you take

It can also give you personalized experiences across Google based on where you go.

When Location History is on, even when Google apps aren’t in use, your precise device location is regularly saved to:

  • Your devices
  • Google servers

To make Google experiences helpful for everyone, we may use your data to:

  • Show information based on anonymized location data, such as:
    • Popular times
    • Environmental insights
  • Detect and prevent fraud and abuse.
  • Improve and develop Google services, such as ads products.
  • Help businesses determine if people visit their stores because of an ad, if you have Web & App Activity turned on.
    • We share only anonymous estimates, not personal data, with businesses.
    • This activity can include info about your location from your device’s general area and IP address.

Learn more about how Google uses location data.

Things to know about Location History:

  • Location History is off by default. We can only use it if you turn Location History on.
  • You can turn off Location History at any time in your Google Account’s Activity controls.
  • You can review and manage your Location History. You can:
    • Review places you’ve been in Google Maps Timeline.
    • Edit or delete your Location History anytime.

Important: Some of these steps work only on Android 8.0 and up. Learn how to check your Android version.

Turn Location History on or off

You can turn off Location History for your account at any time. If you use a work or school account, your administrator needs to make this setting available for you. If they do, you’ll be able to use Location History as any other user.

  1. Go to the “Location History” section of your Google Account.
  2. Choose whether your account or your devices can report Location History to Google.
    • Your account and all devices: At the top, turn Location History on or off.
    • Only a certain device: Under “This device” or “Devices on this account,” turn the device on or off.

When Location History is on

Google can estimate your location with:

  • Signals like Wi-Fi and mobile networks
  • GPS
  • Sensor information

Your device location may also periodically be used in the background. When Location History is on, even when Google apps aren’t in use, your device’s precise location is regularly saved to:

  • Your devices
  • Google servers

When you’re signed in with your Google Account, it saves the Location History of each device with the setting “Devices on this account” turned on. You can find this setting in the Location History settings of your Google Account.

You can choose which devices provide their location data to Location History. Your settings don’t change for other location services on your device, such as:

When Location History is off

Your device doesn’t save its location to your Location History.

  • You may have previous Location History data in your account. You can manually delete it anytime.
  • Your settings don’t change for other location services on your device, such as:
  • If settings like Web and App Activity are on but you turn off Location History or delete location data from Location History, your Google Account may still save location data as part of your use of other Google sites, apps, and services. This activity can include info about your location from your device’s general area and IP address.

Delete Location History

You can manage and delete your Location History information with Google Maps Timeline. You can choose to delete all of your history, or only parts of it.

Important: When you delete Location History information from Timeline, you won’t be able to see it again.

Automatically delete your Location History

You can choose to automatically delete Location History that’s older than 3 months, 18 months, or 36 months.

What happens after you delete some or all Location History

If you delete some or all of your Location History, personalized experiences across Google may degrade or be lost. For example, you may lose:

  • Recommendations based on places you visit
  • Real-time information about when best to leave for home or work to beat traffic

Important: If you have other settings like Web & App Activity turned on and you pause Location History or delete location data from Location History, you may still have location data saved in your Google Account as part of your use of other Google sites, apps, and services. For example, location data may be saved as part of activity on Search and Maps when your Web & App Activity setting is on, and included in your photos depending on your camera app settings. Web & App Activity can include info about your location from your device’s general area and IP address.

Learn about use & diagnostics for Location History

After you turn on Location History, your device may send diagnostic information to Google about what works or doesn’t work for Location History. Google processes any information it collects under Google’s privacy policy.

Learn more about other location settings

Source: Manage your Location History

Health misinformation is rampant on social media

This article was originally featured on The Conversation.

The global anti-vaccine movement and vaccine hesitancy that accelerated during the COVID-19 pandemic show no signs of abating.

According to a survey of U.S. adults, Americans in October 2023 were less likely to view approved vaccines as safe than they were in April 2021. As vaccine confidence falls, health misinformation continues to spread like wildfire on social media and in real life.

I am a public health expert in health misinformation, science communication and health behavior change.

In my view, we cannot underestimate the dangers of health misinformation and the need to understand why it spreads and what we can do about it. Health misinformation is defined as any health-related claim that is false based on current scientific consensus.

False claims about vaccines

Vaccines are the No. 1 topic of misleading health claims. Some common myths about vaccines include:

The costs of health misinformation

Beliefs in such myths have come at the highest cost.

An estimated 319,000 COVID-19 deaths that occurred between January 2021 and April 2022 in the U.S. could have been prevented if those individuals had been vaccinated, according to a data dashboard from the Brown University School of Public Health. Misinformation and disinformation about COVID-19 vaccines alone have cost the U.S. economy an estimated US$50 million to $300 million per day in direct costs from hospitalizations, long-term illness, lives lost and economic losses from missed work.

Though vaccine myths and misunderstandings tend to dominate conversations about health, there is an abundance of misinformation on social media surrounding diets and eating disorders, smoking or substance use, chronic diseases and medical treatments.

My team’s research and that of others show that social media platforms have become go-to sources for health information, especially among adolescents and young adults. However, many people are not equipped to maneuver the maze of health misinformation.

For example, an analysis of Instagram and TikTok posts from 2022 to 2023 by The Washington Post and the nonprofit news site The Examination found that the food, beverage and dietary supplement industries paid dozens of registered dietitian influencers to post content promoting diet soda, sugar and supplements, reaching millions of viewers. The dietitians’ relationships with the food industry were not always made clear to viewers.

Studies show that health misinformation spread on social media results in fewer people getting vaccinated and can also increase the risk of other health dangers such as disordered eating and unsafe sex practices and sexually transmitted infections. Health misinformation has even bled over into animal health, with a 2023 study finding that 53% of dog owners surveyed in a nationally representative sample report being skeptical of pet vaccines.

Health misinformation is on the rise

One major reason behind the spread of health misinformation is declining trust in science and government. Rising political polarization, coupled with historical medical mistrust among communities that have experienced and continue to experience unequal health care treatment, exacerbates preexisting divides.

The lack of trust is both fueled and reinforced by the way misinformation can spread today. Social media platforms allow people to form information silos with ease; you can curate your networks and your feed by unfollowing or muting contradictory views from your own and liking and sharing content that aligns with your existing beliefs and value systems.

By tailoring content based on past interactions, social media algorithms can unintentionally limit your exposure to diverse perspectives and generate a fragmented and incomplete understanding of information. Even more concerning, a study of misinformation spread on Twitter analyzing data from 2006 to 2017 found that falsehoods were 70% more likely to be shared than the truth and spread “further, faster, deeper and more broadly than the truth” across all categories of information.

How to combat misinformation

The lack of robust and standardized regulation of misinformation content on social media places the difficult task of discerning what is true or false information on individual users. We scientists and research entities can also do better in communicating our science and rebuilding trust, as my colleague and I have previously written. I also provide peer-reviewed recommendations for the important roles that parents/caregivers, policymakers and social media companies can play.

Below are some steps that consumers can take to identify and prevent health misinformation spread:

  • Check the source. Determine the credibility of the health information by checking if the source is a reputable organization or agency such as the World Health Organization, the National Institutes of Health or the Centers for Disease Control and Prevention. Other credible sources include an established medical or scientific institution or a peer-reviewed study in an academic journal. Be cautious of information that comes from unknown or biased sources.
  • Examine author credentials. Look for qualifications, expertise and relevant professional affiliations for the author or authors presenting the information. Be wary if author information is missing or difficult to verify.
  • Pay attention to the date. Scientific knowledge by design is meant to evolve as new evidence emerges. Outdated information may not be the most accurate. Look for recent data and updates that contextualize findings within the broader field.
  • Cross-reference to determine scientific consensus. Cross-reference information across multiple reliable sources. Strong consensus across experts and multiple scientific studies supports the validity of health information. If a health claim on social media contradicts widely accepted scientific consensus and stems from unknown or unreputable sources, it is likely unreliable.
  • Question sensational claims. Misleading health information often uses sensational language designed to provoke strong emotions to grab attention. Phrases like “miracle cure,” “secret remedy” or “guaranteed results” may signal exaggeration. Be alert for potential conflicts of interest and sponsored content. (A toy version of this phrase check is sketched below, after the list.)
  • Weigh scientific evidence over individual anecdotes. Prioritize information grounded in scientific studies that have undergone rigorous research methods, such as randomized controlled trials, peer review and validation. When done well with representative samples, the scientific process provides a reliable foundation for health recommendations compared to individual anecdotes. Though personal stories can be compelling, they should not be the sole basis for health decisions.
  • Talk with a health care professional. If health information is confusing or contradictory, seek guidance from trusted health care providers who can offer personalized advice based on their expertise and individual health needs.
  • When in doubt, don’t share. Sharing health claims without validity or verification contributes to misinformation spread and preventable harm.
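Heuristics like the “sensational language” check above are simple enough to automate, at least crudely. Here is a toy Python sketch that flags the red-flag phrases named in the checklist; the phrase list and the exact matching rule are illustrative assumptions, and no such filter substitutes for checking sources:

```python
# Toy red-flag scanner for health claims. The three phrases come from
# the checklist above; matching is a crude heuristic, not fact-checking.
RED_FLAGS = ["miracle cure", "secret remedy", "guaranteed results"]

def flag_sensational(post: str) -> list[str]:
    """Return the red-flag phrases found in a post (case-insensitive)."""
    text = post.lower()
    return [phrase for phrase in RED_FLAGS if phrase in text]

print(flag_sensational("This Secret Remedy gives GUARANTEED RESULTS!"))
# ['secret remedy', 'guaranteed results']
```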

All of us can play a part in responsibly consuming and sharing information so that the spread of the truth outpaces the false.

Monica Wang receives funding from the National Institutes of Health.

Source: Health misinformation is rampant on social media | Popular Science

Orbit Fab wants to create refueling stations in space, standardised fuel ports, refineries

[…] humans have sent over 15,000 satellites into orbit. Just over half are still functioning; the rest, after running out of fuel and ending their serviceable life, have either burned up in the atmosphere or are still orbiting the planet as useless hunks of metal.

[…]

That has created an aura of space junk around the planet, made up of 36,500 objects larger than 10 centimeters (3.94 inches) and a whopping 130 million fragments up to 1 centimeter (0.39 inches).

[…]

“Right now you can’t refuel a satellite on orbit,” says Daniel Faber, CEO of Orbit Fab. But his Colorado-based company wants to change that.

[…]

the lack of fuel creates a whole paradigm where people design their spacecraft missions around moving as little as possible.

“That means that we can’t have tow trucks in orbit to get rid of any debris that happens to be left. We can’t have repairs and maintenance

[…]

Orbit Fab has no plans to address the existing fleet of satellites. Instead, it wants to focus on those that have yet to launch, and equip them with a standardized port — called RAFTI, for Rapid Attachable Fluid Transfer Interface — which would dramatically simplify the refueling operation, keeping the price tag down.

“What we’re looking at doing is creating a low-cost architecture,” says Faber. “There’s no commercially available fuel port for refueling a satellite in orbit yet. For all the big aspirations we have about a bustling space economy, really, what we’re working on is the gas cap — we are a gas cap company.”

A rendering of the future Orbit Fab Shuttle, which will deliver fuel to satellites in need directly on orbit.

Orbit Fab, which advertises itself with the tagline “gas stations in space,” is working on a system that includes the fuel port, refueling shuttles — which would deliver the fuel to a satellite in need — and refueling tankers, or orbital gas stations, which the shuttles could pick up the fuel from. It has advertised a price of $20 million for on-orbit delivery of hydrazine, the most common satellite propellant.
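To put a fuel delivery in perspective, the Tsiolkovsky rocket equation shows how much manoeuvring capability a given mass of propellant buys. A quick Python sketch; the satellite dry mass, the fuel load and the ~220 s specific impulse for a monopropellant hydrazine thruster are ballpark assumptions, not Orbit Fab figures:

```python
# Rough delta-v gained from an on-orbit hydrazine top-up, via the
# Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m_wet / m_dry).
# All vehicle numbers below are illustrative assumptions.
import math

G0 = 9.81             # standard gravity, m/s^2
ISP_HYDRAZINE = 220   # typical monopropellant hydrazine Isp, seconds

def delta_v(dry_mass_kg: float, fuel_kg: float,
            isp_s: float = ISP_HYDRAZINE) -> float:
    """Delta-v (m/s) from burning fuel_kg on a spacecraft of dry_mass_kg."""
    return isp_s * G0 * math.log((dry_mass_kg + fuel_kg) / dry_mass_kg)

# A hypothetical 1,000 kg satellite taking on 100 kg of hydrazine:
print(f"{delta_v(1000, 100):.0f} m/s")  # roughly 206 m/s of manoeuvring budget
```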

In 2018, the company launched two testbeds to the International Space Station to test the interfaces, the pumps and the plumbing. In 2021 it launched Tanker-001 Tenzing, a fuel depot demonstrator that informed the design of the current hardware.

The next launch is now scheduled for 2024. “We are delivering fuel in geostationary orbit for a mission that is being undertaken by the Air Force Research Lab,” says Faber. “At the moment, they’re treating it as a demonstration, but it’s getting a lot of interest from across the US government, from people that realize the value of refueling.”

Orbit Fab’s first private customer will be Astroscale, a Japanese satellite servicing company that has developed the first satellite designed for refueling. Called LEXI, it will mount RAFTI ports and is currently scheduled to launch in 2026.

[…]

He adds that once the pattern of sending and delivering fuel in orbit is established, the next step is to start making the fuel there. “In 10 or 15 years, we’d like to be building refineries in orbit,” he says, “processing material that is launched from the ground into a range of chemicals that people want to buy: air and water for commercial space stations, 3D printer feedstock, minerals to grow plants. We want to be the industrial chemical supplier to the emerging commercial space industry.”

Source: Orbit Fab wants to create ‘gas stations in space’ | CNN

Copyright Troll Porn Company Makes Millions By Shaming Potential Porn Consumers

In 1999 Los Angeles Times reporter Michael Hiltzik co-authored a Pulitzer Prize-winning story. Now a business columnist for the Times, he writes that a Southern California maker of pornographic films named Strike 3 Holdings is also “a copyright troll,” according to U.S. Judge Royce C. Lamberth. Lamberth wrote in 2018: “Armed with hundreds of cut-and-pasted complaints and boilerplate discovery motions, Strike 3 floods this courthouse (and others around the country) with lawsuits smacking of extortion. It treats this Court not as a citadel of justice, but as an ATM.” He likened its litigation strategy to a “high-tech shakedown.” Lamberth was not speaking off the cuff. Since September 2017, Strike 3 has filed more than 12,440 lawsuits in federal courts alleging that defendants infringed its copyrights by downloading its movies via BitTorrent, an online service on which unauthorized content can be accessed by almost anyone with a computer and internet connection.

That includes 3,311 cases the firm filed this year, more than 550 in federal courts in California. On some days, scores of filings reach federal courthouses — on Nov. 17, to select a date at random, the firm filed 60 lawsuits nationwide… Typically, they are settled for what lawyers say are cash payments in the four or five figures or are dismissed outright…

It’s impossible to pinpoint the profits that can be made from this courthouse strategy. J. Curtis Edmondson, a Portland, Oregon, lawyer who is among the few who pushed back against a Strike 3 case and won, estimates that Strike 3 “pulls in about $15 million to $20 million a year from its lawsuits.” That would make the cases “way more profitable than selling their product….” If only one-third of its more than 12,000 lawsuits produced settlements averaging as little as $5,000 each, the yield would come to $20 million… The volume of Strike 3 cases has increased every year — from 1,932 in 2021 to 2,879 last year and 3,311 this year.
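The article’s back-of-the-envelope revenue estimate is easy to verify. A tiny Python sanity check of the claim that one-third of 12,000-plus suits settling at $5,000 apiece yields about $20 million:

```python
# Sanity check on the article's settlement arithmetic.
lawsuits = 12_000         # "more than 12,000 lawsuits" filed since 2017
settle_rate = 1 / 3       # the article's assumption: one-third settle
avg_settlement = 5_000    # dollars, the article's low-end average

print(f"${lawsuits * settle_rate * avg_settlement:,.0f}")  # $20,000,000
```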

What’s really needed is a change in copyright law to bring the statutory damages down to a level that truly reflects the value of a film lost because of unauthorized downloading — not $750 or $150,000 but perhaps a few hundred dollars.

None of the lawsuits go to trial. Instead ISPs get a subpoena demanding the real-world address and name behind IP addresses “ostensibly used to download content from BitTorrent…” according to the article. Strike 3 will then “proceed by sending a letter implicitly threatening the subscriber with public exposure as a pornography viewer and explicitly with the statutory penalties for infringement written into federal copyright law — up to $150,000 for each example of willful infringement and from $750 to $30,000 otherwise.”

A federal judge in Connecticut wrote last year that “Given the nature of the films at issue, defendants may feel coerced to settle these suits merely to prevent public disclosure of their identifying information, even if they believe they have been misidentified.”

Source: Copyright Troll’ Porn Company ‘Makes Millions By Shaming Porn Consumers’ (yahoo.com)

Nissan 300ZX Owner Turns Ford Digital Dash Into Wicked Retro Display – why don’t all automakers allow digital dash theming?!

You’ve got to love a project with amazing elements of both art and science. Nissan 300ZX enthusiast and talented tinkerer Kelvin Elsner has been working on this custom vaporwave-aesthetic digital gauge cluster for months. It’s not in a car yet, but it’s an amazing design and computer coding feat for one guy in his home shop.

Blitzen Design Lab/YouTube

Elsner and I are in at least one of the same Z31 groups (that’s the chassis code for the ’80s 300ZX) on Facebook and every once in a while over the last few years, he’s dropped an update on his quest to make a unique, modern, digital gauge cluster for his Z car. This week, he dropped a cute video with a great overview of his project which made me realize just how complex this undertaking has been. It even made its way to another car site before I had a chance to write it up (nice grab, Lewin)!

Anyway, Elsner here has taken a digital gauge cluster from a modern Ford, reprogrammed it, designed a super cool physical overlay for it, and set it up to be an incredibly cool retro-futuristic upgrade for his 300ZX. Not only that, but he worked out a security-encoded ignition key and retrofitted a power mirror-tilt control to act as a controller for the screen! Watch how he did it here:

The pacing of this video is more mellow than what usually goes viral on YouTube, which is another reason why I like it so much. I strongly recommend sitting down for an earnest end-to-end watch.

The Z31 famously had an optional digital dash when it was new, but “digital” by ’80s standards was more like a calculator display. Elsner’s system retains the vaporwave caricature aesthetic while leveraging the modern, crisp resolution of a Ford Explorer gauge cluster. The 3D overlay is really what brings it home for me, though.

Here’s what the factory Z31 digi-dash looks like. It’s pretty cool in its own right. Michael’s Motor Cars/YouTube

You can add all the colors and animations you want, but that physical depth is what makes a gauge cluster visually interesting and distinctive. Take note, automakers.

I shot Elsner some messages on Facebook about his project. I’m grateful to say he replied, so I can share some elaborations on what he presented in the video. I’ll trim and paraphrase the details he shared.

He’s not an automotive engineer by trade, considers this project a hobby, and doesn’t currently have any plans for mass production or marketing for sale.

As far as the time investment, the first pictures of the project go as far back as 2019. “Time-wise I’d say it’s at least a good few months worth of work but it was spread out over a couple years, I only really had spare time in the evenings and definitely worked on it off and on,” Elsner wrote me on Facebook Messenger. And of course, it’s not running in a car yet, so we can’t quite say the mission is complete.

The part of this project I understand the least is how the display was hacked to show this cool synthwave sunset and move the gauges around. I’ll drop Elsner’s quote about firmware here wholesale so I don’t incorrectly paraphrase:

“The firmware stuff I stumbled on when I was researching how to get the cluster to work—you could get this cluster in Mondeos, but not in the Fusion in North America. It turns out a lot of people were swapping them in, and in the forums I was browsing I found that some folks had some modified software with pictures of their cars added into them.

“I was on a hunt for a while trying to figure out how to do the same, and I eventually came across a post in a Facebook group where some folks were discussing the subject, and someone finally made mention and linked to the software that was able to unpack the firmware graphics.

“This was called PimpMyFord, and then I used Forscan (another program that can be used to adjust module configurations on Ford models) to upload the firmware.”

Elsner used this Ford mirror control as a joystick, or mouse, so a user can cycle through menus. Blitzen Design Lab/YouTube

Another question I had after watching the video was—how the heck was this modern Ford gauge cluster going to interpret information from the sensors and senders in an ’80s Nissan? The Z31 I used to own had a cable-driven speedometer and a dang miniature phonograph to play the “door is open” warnings. Seems like translating those signals would be a little more involved than a USB to micro-USB adapter. I asked about that and Elsner added more detail:

“On the custom board I made, I have some microcontrollers that read the analog voltages and signals that were originally provided to the stock cluster, and they convert those readings into digital data. This is then used to construct canbus messages that imitate the original Ford ones, which are fed to the Ford cluster through an onboard transceiver … So as far as the cluster is concerned, it’s still connected to an Explorer that just has some weird things to say,” he wrote.
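Elsner’s board is essentially a translator: read the Z31’s analog senders, then speak Ford’s CAN dialect to the cluster. A minimal Python sketch of that idea using the python-can library; the arbitration ID, byte layout and sender calibration are made-up placeholders, since the real Ford cluster messages aren’t given in the article:

```python
# Sketch of the analog-to-CAN translation Elsner describes: sample a
# legacy sender voltage, convert it to engineering units, and emit a
# CAN frame the modern cluster understands. The arbitration ID and
# byte layout below are hypothetical placeholders, not real Ford IDs.
import can

def read_coolant_voltage() -> float:
    """Stand-in for an ADC read of the Z31's coolant-temp sender."""
    return 2.1  # volts (placeholder value)

def voltage_to_celsius(volts: float) -> float:
    """Toy linear calibration for the sender (assumed, not measured)."""
    return 20.0 + (volts / 5.0) * 100.0

bus = can.interface.Bus(channel="can0", interface="socketcan")

temp_c = voltage_to_celsius(read_coolant_voltage())
frame = can.Message(
    arbitration_id=0x420,  # hypothetical cluster message ID
    data=[int(temp_c) & 0xFF, 0, 0, 0, 0, 0, 0, 0],
    is_extended_id=False,
)
bus.send(frame)  # the cluster now hears an "Explorer" talking to it
```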

Here I am thinking I’m Tony Stark when I hack up a bit of square stock to make a fog light bracket, while this dude is creating a completely bespoke human-machine interface that looks cool enough to be a big-budget movie prop.

With the extinction of combustion engines looming as a near-future possibility, it’s easy to be cynical about the future of cars as a hobby. But projects like this get me fired up and optimistic that there’s still uncharted territory for creativity to thrive in car customization.

Check out Kelvin Elsner’s YouTube channel Blitzen Design Lab—he’s clearly up to some really cool stuff and I can’t wait to see what he comes up with next.

Source: Nissan 300ZX Owner Turns Ford Digital Dash Into Wicked Retro Display

GE’s Breakthrough in Dual-Mode Ramjet with Rotating Detonation Hypersonic Propulsion

GE Aerospace says it successfully demonstrated an advanced jet propulsion concept that involves a dual-mode ramjet design utilizing rotating detonation combustion. This could offer a pathway to the development of new aircraft and missiles capable of flying efficiently at high supersonic and even hypersonic speeds across long distances.

A press release that GE Aerospace put out today offers new details about what it says “is believed to be a world-first hypersonic dual-mode ramjet (DMRJ) rig test with rotating detonation combustion (RDC) in a supersonic flow stream.” Hypersonic speed is defined as anything above Mach 5. Amy Gowder, President and CEO of the Defense & Systems division of GE Aerospace, previously disclosed this project, but offered more limited information, at this year’s Paris Air Show in June.

A rendering of a rotating detonation engine design. USAF/AFRL via Aviation Week

“A typical air-breathing DMRJ propulsion system can only begin operating when the vehicle achieves supersonic speeds of greater than Mach 3,” the press release explains. “GE Aerospace engineers are working on a rotating detonation-enabled dual mode ramjet that is capable of operating at lower Mach numbers, enabling the flight vehicle to operate more efficiently and achieve longer range.”

“RDC [rotating detonation combustion] enables higher thrust generation more efficiently, at an overall smaller engine size and weight, by combusting the fuel through detonation waves instead of a standard combustion system that powers traditional jet engines today,” the press release adds.

To elaborate, in most traditional gas turbines, including turbofan and turbojet engines, air is fed in from an inlet and compressed, and then is mixed with fuel and burned via deflagration (where combustion occurs at a subsonic rate) in a combustion chamber. This process creates the continuous flow of hot, high-pressure air needed to make the whole system run.

A rotating detonation engine (which involves combustion that happens at a supersonic rate) instead “starts with one cylinder inside another larger one, with a gap between them and some small holes or slits through which a detonation fuel mix can be pushed,” according to a past article on the general concept from New Atlas. “Some form of ignition creates a detonation in that annular gap, which creates gases that are pushed out one end of the ring-shaped channel to produce thrust in the opposite direction. It also creates a shockwave that propagates around the channel at around five times the speed of sound, and that shockwave can be used to ignite more detonations in a self-sustaining, rotating pattern if fuel is added in the right spots at the right times.”
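The numbers in that description imply a strikingly fast combustion cycle. A small Python estimate of how often a detonation wave laps the annular channel, assuming a hypothetical 30 cm annulus diameter and taking the article’s “around five times the speed of sound” at face value:

```python
# How often does a rotating detonation wave lap its channel?
# The annulus diameter is an assumed example; the wave speed follows
# the article's "around five times the speed of sound" figure.
import math

SPEED_OF_SOUND = 343.0           # m/s, air at sea level
wave_speed = 5 * SPEED_OF_SOUND  # ~1,715 m/s
annulus_diameter = 0.30          # meters (assumed)

circumference = math.pi * annulus_diameter  # ~0.94 m
lap_frequency = wave_speed / circumference  # laps per second

print(f"{lap_frequency:,.0f} laps/s")  # ~1,800 -- a kHz-scale combustion cycle
```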

The video below offers a more detailed walkthrough of the rotating detonation engine concept.

[…]

In principle, rotating detonation requires less fuel to produce the same level of power/thrust as combustion via deflagration. The resulting sustained shockwave builds its own pressure, as well, leading to even greater fuel efficiency. Pressure is steadily lost during deflagration.

In addition, rotating detonation typically requires far fewer moving parts than are needed in traditional gas turbines. In theory, this should all allow for rotating detonation engine designs that are significantly smaller, lighter, and less complex than existing types with similar very high power/thrust output.

[…]

“GE engineers are now testing the transition mode at high-supersonic speeds as thrust transitions from the RDE-equipped turbine and the dual-mode ramjet/scramjet,” GE Aerospace’s Gowder said in Paris earlier this year, according to Aviation Week.

[…]

A combined ramjet and rotating detonation concept could be an especially big deal for future missiles, like the ones DARPA’s Gambit project is envisioning, and possibly high-speed air vehicles for reconnaissance use. This propulsion arrangement could allow for greater efficiency and lighter (and potentially smaller) airframes, which in turn allow for greater performance — especially in terms of range — and/or payload capacity. If rotating detonation combustion can reduce the minimum speed required to get the ramjet working, this would reduce the amount of initial boost such a system would need at the outset, too. This would mean a smaller overall package. All of this opens doors to new levels of operational flexibility.

This new engine concept could also potentially become one component of what is known as a turbine-based combined cycle (TBCC) engine arrangement, of which there has been much talk in recent years. Most TBCC design concepts revolve around combinations of advanced ramjets or scramjets for use at high speeds and traditional turbojet engines that work better at low speeds.

A graphical depiction of a notional turbine-based combined cycle engine arrangement. Lockheed Martin

A practical TBCC concept of any kind has long been a holy grail technology when it comes to designing very high-speed aircraft. A propulsion system that allows for this kind of high and low-speed flexibility would mean an aircraft could take off from and land on any suitable existing runway, but also be capable of sustained high-supersonic or even hypersonic speeds in the middle portion of a flight.

[…]

Source: GE’s Breakthrough In ‘Detonating’ Hypersonic Propulsion Is A Big Deal

Artificial intelligence and copyright – WIPO

[…]

Robotic artists have been involved in various types of creative works for a long time. Since the 1970s, computers have been producing crude works of art, and these efforts continue today. Most of these computer-generated works of art relied heavily on the creative input of the programmer; the machine was at most an instrument or a tool, very much like a brush or canvas.

[…]

When applied to art, music and literary works, machine learning algorithms are actually learning from input provided by programmers. They learn from these data to generate a new piece of work, making independent decisions throughout the process to determine what the new work looks like. An important feature of this type of artificial intelligence is that while programmers can set parameters, the work is actually generated by the computer program itself – referred to as a neural network – in a process akin to the thought processes of humans.

[…]

Creating works using artificial intelligence could have very important implications for copyright law. Traditionally, the ownership of copyright in computer-generated works was not in question because the program was merely a tool that supported the creative process, very much like a pen and paper. Creative works qualify for copyright protection if they are original, with most definitions of originality requiring a human author. Most jurisdictions, including Spain and Germany, state that only works created by a human can be protected by copyright.

But with the latest types of artificial intelligence, the computer program is no longer a tool; it actually makes many of the decisions involved in the creative process without human intervention.

Commercial impact

One could argue that this distinction is not important, but the manner in which the law tackles new types of machine-driven creativity could have far-reaching commercial implications. Artificial intelligence is already being used to generate works in music, journalism and gaming. These works could in theory be deemed free of copyright because they are not created by a human author. As such, they could be freely used and reused by anyone. That would be very bad news for the companies selling the works.

[…]

If developers doubt whether creations generated through machine learning qualify for copyright protection, what is the incentive to invest in such systems? On the other hand, deploying artificial intelligence to handle time-consuming endeavors could still be justified, given the savings accrued in personnel costs, but it is too early to tell.

[…]

There are two ways in which copyright law can deal with works where human interaction is minimal or non-existent. It can either deny copyright protection for works that have been generated by a computer or it can attribute authorship of such works to the creator of the program.

[…]

Should the law recognize the contribution of the programmer or the user of that program? In the analogue world, this is like asking whether copyright should be conferred on the maker of a pen or the writer. Why, then, could the existing ambiguity prove problematic in the digital world? Take the case of Microsoft Word. Microsoft developed the Word computer program but clearly does not own every piece of work produced using that software. The copyright lies with the user, i.e. the author who used the program to create his or her work. But when it comes to artificial intelligence algorithms that are capable of generating a work, the user’s contribution to the creative process may simply be to press a button so the machine can do its thing.

[…]

Monumental advances in computing and the sheer amount of available computational power may well make the distinction moot; when you give a machine the capacity to learn styles from large datasets of content, it will become ever better at mimicking humans. And given enough computing power, soon we may not be able to distinguish between human-generated and machine-generated content. We are not yet at that stage, but if and when we do get there, we will have to decide what type of protection, if any, we should give to emergent works created by intelligent algorithms with little or no human intervention

[…]

Source: Artificial intelligence and copyright

It’s interesting to read that in 2017 the training material used was considered irrelevant to the output – as it should be. The books and art that go into AIs are just like the books and art that go into humans. The derived works that AIs and humans make belong to them, not to the content they are based on. And just because an AI – just like a human – can quote the original source material doesn’t change that.

AI Doomsayers: Debunking the Despair

Shortly after ChatGPT’s release, a cadre of critics rose to fame claiming AI would soon kill us. As wondrous as a computer speaking in natural language might be, it could use that intelligence to level the planet. The thinking went mainstream via letters calling for research pauses and “60 Minutes” interviews amplifying existential concerns. Leaders like Barack Obama publicly worried about AI autonomously hacking the financial system — or worse. And last week, President Biden issued an executive order imposing some restraints on AI development.

AI Experts Dismiss Doom, Defend Progress

That was enough for several prominent AI researchers who finally started pushing back hard after watching the so-called AI Doomers influence the narrative and, therefore, the field’s future. Andrew Ng, the soft-spoken co-founder of Google Brain, said last week that worries of AI destruction had led to a “massively, colossally dumb idea” of requiring licenses for AI work. Yann LeCun, a machine-learning pioneer, eviscerated research-pause letter writer Max Tegmark, accusing him of risking “catastrophe” by potentially impeding AI progress and exploiting “preposterous” concerns. A new paper earlier this month indicated large language models can’t do much beyond their training, making the doom talk seem overblown. “If ‘emergence’ merely unlocks capabilities represented in pre-training data,” said Princeton professor Arvind Narayanan, “the gravy train will run out soon.”


AI Doom Hype Benefits Tech Giants

Worrying about AI safety isn’t wrongheaded, but these Doomers’ path to prominence has insiders raising eyebrows. They may have come to their conclusions in good faith, but companies with plenty to gain by amplifying Doomer worries have been instrumental in elevating them. Leaders from OpenAI, Google DeepMind and Anthropic, for instance, signed a statement putting AI extinction risk on the same plane as nuclear war and pandemics. Perhaps they’re not consciously attempting to block competition, but they can’t be that upset it might be a byproduct.

AI Alarmism Spurs Restrictive Government Policies

All this alarmism makes politicians feel compelled to do something, leading to proposals for strict government oversight that could restrict AI development outside a few firms. Intense government involvement in AI research would help big companies, which have compliance departments built for these purposes. But it could be devastating for smaller AI startups and open-source developers who don’t have the same luxury.

 
Doomer Rhetoric: Big Tech’s Unlikely Ally

“There’s a possibility that AI doomers could be unintentionally aiding big tech firms,” Garry Tan, CEO of startup accelerator Y Combinator, told me. “By pushing for heavy regulation based on fear, they give ammunition to those attempting to create a regulatory environment that only the biggest players can afford to navigate, thus cementing their position in the market.”

Ng took it a step further. “There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they’re creating fear of AI leading to human extinction,” he told the Australian Financial Review.

Doomers’ AI Fears Lack Substance

The AI Doomers’ worries, meanwhile, feel pretty thin. “I expect an actually smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably — and then kill us,” Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, told a rapt audience at TED this year. He confessed he didn’t know how or why an AI would do it. “It could kill us because it doesn’t want us making other superintelligences to compete with it,” he offered.

Bankman-Fried Scandal Should Ignite Skepticism

After Sam Bankman-Fried ran off with billions while professing to save the world through “effective altruism,” it’s high time to regard those claiming to improve society while furthering their business aims with relentless skepticism. As the Doomer narrative presses on, it threatens to rhyme with a familiar pattern.

AI Fear Tactics Threaten Open-Source Movement

Big Tech companies already have a significant lead in the AI race via cloud computing services that they lease out to preferred startups in exchange for equity. Further advantaging them might hamstring the promising open-source AI movement — a crucial area of competition — to the point of obsolescence. That’s probably why you’re hearing so much about AI destroying the world. And why it should be considered with a healthy degree of caution.

Source: AI Doomsayers: Debunking the Despair

EU’s top court rules in favour of Amazon and Luxembourg in €250m tax dispute – Have a mailbox in Lux and a massive corp tax haven!

The Court of Justice in Luxembourg ruled on Thursday against the appeal of the EU Commission. The Commission challenged a 2021 decision of the General Court of the European Union, which annulled the Commission’s illegal state aid charges against Amazon.

[…]

In a statement from October 2017, the EU Commission concluded that Luxembourg granted undue tax benefits to the online sales giant by allowing it to shift profits to a tax-exempt company, Amazon Europe Holding Technologies.

[…]

Back in 2003, the Grand Duchy accepted Amazon’s proposal on the tax treatment of two of its Luxembourg-based subsidiaries, allowing Amazon to shift profits from Amazon EU, which is subject to tax, to a tax-exempt company, Amazon Europe Holding Technologies.

After a three-year investigation launched in October 2014, the European Commission concluded in 2017 that the online sales giant received illegal tax benefits from Luxembourg.

[…]

The General Court ruled in 2021 that “Luxembourg had not granted a selective advantage in favour of that subsidiary”, annulling the EU Commission’s decision.

The Commission then submitted its appeal against the ruling of the EU’s lower court, which has now been rejected by the Court of Justice, the EU’s top court. The verdict is another blow to the approach of Margrethe Vestager, who for a decade held the post of EU competition chief and also lost a landmark case contesting Apple’s tax regime in Ireland.

[…]

According to Matthias Kullas, Centre for European Policy expert on digital economy and fiscal policy, the ruling makes it more difficult for the Commission to take action against the aggressive tax planning of large digital companies.

“Aggressive tax planning means that taxes are no longer paid where economic value is generated. Instead, companies are established where taxes are low,” Kullas told Euractiv.

Companies with aggressive tax planning reduce their participation in financing public goods in the market. Yet, proportionate participation would only be fair, as these companies likewise benefit from public goods, including education and the administration of justice, Kullas explained.

“Against this backdrop, the minimum taxation that will apply in the EU from 2024 is a step in the right direction but does not solve the problem,” Kullas added.

For Chiara Putaturo, Oxfam EU tax expert, the EU tax rules do not work for the people but benefit the “super-rich and profit-hungry multinationals”.

[…]

“Profit-driven multinationals cannot continue to sidestep their tax bills by having a mailbox in countries like Luxembourg or Cyprus,” she added.

In November, the EU, US, and UK voted against the UN tax convention to fight tax evasion and illicit financial flows, arguing that the Convention would be a duplication of the OECD’s work on tax transparency.

Source: EU’s top court rules in favour of Amazon in €250m tax dispute – EURACTIV.com

Yoga nidra might be a path to better sleep and improved memory

Practicing yoga nidra — a kind of mindfulness training — might improve sleep, cognition, learning, and memory, even in novices, according to a pilot study publishing in the open-access journal PLOS ONE on December 13 by Karuna Datta of the Armed Forces Medical College in India, and colleagues. After a two-week intervention with a cohort of novice practitioners, the researchers found that the percentage of delta-waves in deep sleep increased and that all tested cognitive abilities improved.

Unlike more active forms of yoga, which focus on physical postures, breathing, and muscle control, yoga nidra guides people into a state of conscious relaxation while they are lying down. While it has been reported to improve sleep and cognitive ability, those reports were based more on subjective measures than on objective data. The new study used objective polysomnographic measures of sleep and a battery of cognitive tests. Measurements were taken before and after two weeks of yoga nidra practice, which was carried out during the daytime using a 20-minute audio recording.

Among other things, polysomnography measures brain activity to determine how long each sleep stage lasts and how frequently each stage occurs. After two weeks of yoga nidra, the researchers observed that participants exhibited a significantly increased sleep efficiency and percentage of delta-waves in deep sleep. They also saw faster responses in all cognitive tests with no loss in accuracy and faster and more accurate responses in tasks including tests of working memory, abstraction, fear and anger recognition, and spatial learning and memory tasks. The findings support previous studies which link delta-wave sleep to improved sleep quality as well as better attention and memory.
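Polysomnography scores delta activity from the EEG power spectrum. A minimal Python sketch of how a delta-wave percentage might be estimated from a raw EEG trace; the synthetic signal, sampling rate and band edges are textbook-style assumptions, not the study’s actual pipeline:

```python
# Estimate the delta-band (0.5-4 Hz) share of EEG power -- the kind of
# metric behind a "percentage of delta-waves in deep sleep". The signal
# here is synthetic and the band edges are conventional assumptions.
import numpy as np
from scipy.signal import welch

fs = 256                          # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)      # one 30-second scoring epoch
# Synthetic EEG: strong 2 Hz delta plus weaker 10 Hz alpha and noise.
eeg = 60 * np.sin(2 * np.pi * 2 * t) + 15 * np.sin(2 * np.pi * 10 * t)
eeg += 5 * np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)

def band_power(lo: float, hi: float) -> float:
    """Integrate spectral power between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

delta_pct = 100 * band_power(0.5, 4) / band_power(0.5, 30)
print(f"delta share: {delta_pct:.0f}%")  # dominated by the 2 Hz component
```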

[…]

Source: Yoga nidra might be a path to better sleep and improved memory | ScienceDaily

Adaptive wax-motor roof tile can cut both heating and cooling costs

[…]

an adaptive tile, which, when deployed in arrays on roofs, can lower heating bills in winter and cooling bills in summer, without the need for electronics.

“It switches between a heating state and a cooling state, depending on the temperature of the tile,” said Xiao, the lead author of the study. “The target temperature is about 65° F — about 18° C.”

[…]

It wasn’t until Xiao’s idea of using a wax motor that the idea of adaptive roof tiles took its final shape. Based on the change in the volume of wax in response to temperatures it is exposed to, a wax motor creates pressure that moves mechanical parts, translating thermal energy into mechanical energy. Wax motors are commonly found in various appliances such as dishwashers and washing machines, as well in more specialized applications, such as in the aerospace industry.

In the case of the tile, the wax motor, depending on its state, can push or retract pistons that close or open louvers on the tile’s surface. So, in cooler temperatures, while the wax is solid, the louvers are closed and lay flat, exposing a surface that absorbs sunlight and minimizes heat dissipation through radiation.

But as soon as the temperatures reach around 18° C, the wax begins to melt and expand, pushing the louvers open and exposing a surface that reflects sunlight and emits heat.

In addition, during the melting or freezing process, the wax also absorbs or releases a large amount of heat, further stabilizing the temperature of the tile and the building.

“So we have a very predictable switching behavior that works within a very tight band,” Xiao explained. According to the researchers’ paper, testing has demonstrated a reduction in energy consumption for cooling by 3.1x and heating by 2.6x compared with non-switching devices covered with conventional reflective or absorbing coatings. Because of the wax motor, no electronics, batteries or external power sources are required to operate the device, and unlike other similar technologies, it is responsive within a few degrees of its target range.
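Because the switch is purely thermomechanical, the louver logic is easy to model. A toy Python simulation, assuming the ~18 °C switching temperature from the article and a hypothetical 1 °C hysteresis band (the researchers report switching within “a very tight band,” but don’t give its exact width):

```python
# Toy model of the wax-motor louver switch: louvers open (reflective,
# cooling state) above the wax melting point and close (absorptive,
# heating state) below it. The 1 degC hysteresis band is an assumed
# illustration; the paper only says the band is "very tight".
T_MELT = 18.0      # degC, switching temperature from the article
HYSTERESIS = 1.0   # degC, assumed band width

def louvers_open(tile_temp: float, currently_open: bool) -> bool:
    """Return the louver state after the tile reaches tile_temp."""
    if tile_temp >= T_MELT + HYSTERESIS / 2:
        return True            # wax melted and expanded: cooling state
    if tile_temp <= T_MELT - HYSTERESIS / 2:
        return False           # wax solidified: heating state
    return currently_open      # inside the band: hold current state

state = False
for temp in (10, 16, 18.6, 22, 18.0, 17.4, 12):
    state = louvers_open(temp, state)
    print(f"{temp:5.1f} degC -> louvers {'open' if state else 'closed'}")
```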

[…]

Source: This adaptive roof tile can cut both heating and cooling costs | ScienceDaily