Hospitals owned by private equity are harming patients, reports find

Private equity firms are increasingly buying hospitals across the US, and when they do, patients suffer, according to two separate reports. Specifically, the equity firms cut corners, slash services, lay off staff, lower quality of care, take on substantial debt, and reduce charity care, leading to lower ratings and more medical errors, the reports collectively find.

Last week, the financial watchdog organization Private Equity Stakeholder Project (PESP) released a report delving into the state of two of the nation’s largest hospital systems, Lifepoint and ScionHealth—both owned by private equity firm Apollo Global Management. Through those two systems, Apollo runs 220 hospitals in 36 states, employing around 75,000 people.

The report found that some of Apollo’s hospitals were among the worst in their respective states, based on a ranking by The Lown Institute Hospital Index. The index ranks hospitals and health systems based on health equity, value, and outcomes, PESP notes. The hospitals also have dismal readmission rates and government rankings. The Centers for Medicare & Medicaid Services (CMS) rates hospitals on a one- to five-star system, with a national average of 3.2 stars overall and about 30 percent of hospitals at two stars or below. Apollo’s overall average is 2.8 stars, with nearly 40 percent of its hospitals at two stars or below.

Patterns

The other report, a study published in JAMA late last month, found that the rate of serious medical errors and health complications increases among patients in the first few years after private equity firms take over. The study examined Medicare claims from 51 private equity-run hospitals and 259 matched control hospitals.

Specifically, the study, led by researchers at Harvard University, found that patients admitted to private equity-owned hospitals had a 25 percent higher rate of hospital-acquired conditions compared with patients in the control hospitals. In private equity hospitals, patients experienced a 27 percent increase in falls, a 38 percent increase in central-line bloodstream infections (despite those hospitals placing 16 percent fewer central lines than control hospitals), and a doubling of surgical site infections.

“These findings heighten concerns about the implications of private equity on health care delivery,” the authors concluded.

The study also squares with PESP’s investigation, which collected various data and media reports that could help explain how those medical errors could happen. The report found a pattern of cost-cutting and staff layoffs after private equity acquisition. In 2020, for instance, Lifepoint cut its annual salary and benefit costs by $166 million over the previous year and cut its supply costs by $54 million. Staff who remained at Apollo’s hospitals were, in some cases, underpaid, and some hospitals cut services, including obstetric, pediatric, and psychiatric care.

Another pattern was that Apollo’s hospitals were highly indebted. According to Moody’s Investors Service, Apollo’s ScionHealth carries 5.8 times more debt than the income available to pay that debt off. Lifepoint’s debt was 7.9 times its income. Private equity firms often take on excessive debt for leveraged buyouts, but this can divert cash to interest payments instead of operational needs, PESP reported.
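As a rough illustration of why those debt multiples matter, here is a back-of-the-envelope sketch. The 5.8x and 7.9x figures come from the article; the 8 percent interest rate is an assumed placeholder, not a number from the report:

```python
# Illustrative sketch (hypothetical interest rate): how a leveraged buyout's
# debt load can consume operating income through interest payments alone.

def interest_share_of_earnings(debt_to_earnings: float, interest_rate: float) -> float:
    """Fraction of annual income consumed by interest on the debt."""
    return debt_to_earnings * interest_rate

# Debt-to-income multiples reported by Moody's, per the PESP report.
for name, multiple in [("ScionHealth", 5.8), ("Lifepoint", 7.9)]:
    share = interest_share_of_earnings(multiple, 0.08)  # assume 8% average rate
    print(f"{name}: interest consumes {share:.0%} of income")
```

At that assumed rate, debt service alone would eat roughly half to two-thirds of income before a dollar goes to staffing or supplies, which is the dynamic PESP describes.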

Apollo also made money off the hospitals in sale-leaseback transactions, in which it sold the land under the hospitals and then leased it back. In these cases, hospitals are left paying rent on land they used to own.

[…]

Source: Hospitals owned by private equity are harming patients, reports find | Ars Technica

Ancient cities discovered in the Amazon are the largest yet found

Aerial surveys have revealed the largest pre-colonial cities in the Amazon yet discovered, linked by an extensive network of roads.

“The settlements are much bigger than others in the Amazon,” says Stéphen Rostain at the French National Center for Scientific Research in Paris. “They are comparable with Maya sites.”

What’s more, at between 3000 and 1500 years old, these cities are also older than other pre-Columbian ones discovered in the Amazon. Why the people who built them disappeared isn’t clear.

It is often assumed that the Amazon rainforest was largely untouched by humans before the Italian explorer Christopher Columbus reached the Americas in the 15th century. In fact, the first Europeans reported seeing many farms and towns in the region.

These reports, long dismissed, have in recent decades been backed up by discoveries of ancient earthworks and extensive dark soils created by farmers. One estimate puts the pre-Columbian population of the Amazon as high as 8 million.

[…]

In 2015, Rostain’s team did an aerial survey with lidar, a laser scanning technique that can create a detailed 3D map of the surface beneath most vegetation, revealing features not normally visible to us. The findings, which have only now been published, show that the settlements were far more extensive than anyone realised.

The survey revealed more than 6000 raised earthen platforms within an area of 300 square kilometres. These are where wooden buildings once stood – excavations have revealed post holes and fireplaces on these structures.

[…]

The survey also revealed a network of straight roads created by digging out soil and piling it on the sides. The longest extends for at least 25 kilometres, but might continue beyond the area that was surveyed.

[…]

“This is the largest complex with large settlements so far found in Amazonia,” says Charles Clement at the National Institute of Amazonian Research in Manaus, Brazil.

What’s more, it was found in a region of the Amazon that other researchers had concluded was sparsely inhabited during pre-Columbian times, says Clement.


Journal reference:

Science DOI: 10.1126/science.adi6317

Source: Ancient cities discovered in the Amazon are the largest yet found | New Scientist

eBay Sent Critics a Bloody Pig Mask and more. Now It’s Paying a $3 Million Fine

eBay agreed to pay out a $3 million fine—the maximum criminal penalty—over a twisted scandal that saw top executives and other employees stalking a couple in Massachusetts who published a newsletter that criticized the company. The harassment campaign included online threats, sending employees to surveil the couple’s home, and mailing them disturbing objects—including live spiders and cockroaches, a bloody pig mask, and a book on recovering from the death of a spouse.

The Justice Department charged eBay with obstruction of justice, witness tampering, stalking through interstate travel, and stalking through online communication. eBay’s former security director James Baugh and former director of global resiliency David Harville are both serving jail time for their roles in the scheme.

[…]

The criminal activity seems to have started at the top of the company. In 2019, Ina Steiner published an article on the couple’s newsletter EcommerceBytes discussing a lawsuit eBay brought against Amazon. Half an hour later, eBay’s then-CEO Devin Wenig sent another executive a message saying: “If you are ever going to take her down…now is the time,” according to court documents. The message was forwarded to Baugh, who responded that Steiner was a “biased troll who needs to get BURNED DOWN.”

Wenig, who resigned later that year, denied any knowledge of the criminal activity and wasn’t charged with a crime. The Steiners are currently suing Wenig for his role in the campaign to “intimidate, threaten to kill, torture, terrorize, stalk and silence them.”

[…]

A total of seven eBay employees and contractors have been convicted for their involvement in stalking and harassing the Steiners, according to the Department of Justice. In addition to Baugh and Harville, the list includes Stephanie Popp and Philip Cooke, who were both sentenced to jail time in 2022. Stephanie Stockwell and Veronica Zea were each sentenced to one year of home confinement that same year. Brian Gilbert pleaded guilty and is currently awaiting sentencing.

Source: eBay Sent Critics a Bloody Pig Mask. Now It’s Paying a $3 Million Fine

23andMe tells victims it’s their fault that their data was breached. DNA data, it turns out, is extremely sensitive!

Facing more than 30 lawsuits from victims of its massive data breach, 23andMe is now deflecting the blame to the victims themselves in an attempt to absolve itself from any responsibility, according to a letter sent to a group of victims seen by TechCrunch.

“Rather than acknowledge its role in this data security disaster, 23andMe has apparently decided to leave its customers out to dry while downplaying the seriousness of these events,” Hassan Zavareei, one of the lawyers representing the victims who received the letter from 23andMe, told TechCrunch in an email.

In December, 23andMe admitted that hackers had stolen the genetic and ancestry data of 6.9 million users, nearly half of all its customers.

The data breach started with hackers accessing only around 14,000 user accounts. The hackers broke into this first set of victims with credential stuffing—trying username and password pairs already exposed in breaches of other services, on the assumption that those customers had reused them.

From these 14,000 initial victims, however, the hackers were able to then access the personal data of the other 6.9 million victims because those victims had opted in to 23andMe’s DNA Relatives feature. This optional feature allows customers to automatically share some of their data with people who are considered their relatives on the platform.

In other words, by hacking into only 14,000 customers’ accounts, the hackers subsequently scraped personal data of another 6.9 million customers whose accounts were not directly hacked.
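The cascade is easy to model: treat the sharing feature as a graph and compute what an attacker holding a few accounts can see. A minimal sketch (the account names and the `links` structure are hypothetical, not 23andMe’s actual data model):

```python
# Minimal model of a breach cascading through an opt-in sharing feature:
# each compromised account also exposes every profile linked to it, so a
# small set of hacked logins can reveal a much larger set of profiles.

def exposed_profiles(links: dict[str, set[str]], compromised: set[str]) -> set[str]:
    """Return every profile visible to an attacker holding `compromised`."""
    exposed = set(compromised)
    for account in compromised:
        # Data shared with a hacked account is readable without hacking its owner.
        exposed |= links.get(account, set())
    return exposed

# Toy network: hacking one account ("a") exposes its three linked relatives;
# "e" and "f" stay private because no compromised account links to them.
links = {"a": {"b", "c", "d"}, "e": {"f"}}
print(sorted(exposed_profiles(links, {"a"})))  # ['a', 'b', 'c', 'd']
```

In the real incident the ratio was far starker: roughly 14,000 footholds exposed data on 6.9 million profiles, because relative-matching links are dense.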

But in a letter sent to a group of hundreds of 23andMe users who are now suing the company, 23andMe said that “users negligently recycled and failed to update their passwords following these past security incidents, which are unrelated to 23andMe.”

“Therefore, the incident was not a result of 23andMe’s alleged failure to maintain reasonable security measures,” the letter reads.

Zavareei said that 23andMe is “shamelessly” blaming the victims of the data breach.

[…]

“The breach impacted millions of consumers whose data was exposed through the DNA Relatives feature on 23andMe’s platform, not because they used recycled passwords. Of those millions, only a few thousand accounts were compromised due to credential stuffing. 23andMe’s attempt to shirk responsibility by blaming its customers does nothing for these millions of consumers whose data was compromised through no fault of their own whatsoever,” said Zavareei.

[…]

In an attempt to pre-empt the inevitable class action lawsuits and mass arbitration claims, 23andMe changed its terms of service to make it more difficult for victims to band together when filing a legal claim against the company. Lawyers with experience representing data breach victims told TechCrunch that the changes were “cynical,” “self-serving” and “a desperate attempt” to protect itself and deter customers from going after the company.

Clearly, the changes didn’t stop what is now a flurry of class action lawsuits.

Source: 23andMe tells victims it’s their fault that their data was breached | TechCrunch

Twitch Is Being Strangely American and Bans Implied Nakedness In Response To ‘Nudity Meta’

As December 2023 got underway, some streamers cleverly played around with Twitch’s restrictions on nudity, broadcasting in a fashion that implied they were completely naked on camera. Twitch, in response, began banning folks before shifting gears to allow various forms of “artistic nudity” to proliferate on the platform. Then, after quickly rescinding that decision and reiterating that being naked while livestreaming is a no-no, the company made it clear that implied nudity is also forbidden, and that anyone who tries to circumvent the rules will face disciplinary action.

In a January 3 blog post, the company laid out the new guidelines on implied nudity, which is now prohibited effective immediately. Anyone who shows skin the rules say should be covered—think genitals, nipples “for those who present as women,” and the like—will face “an enforcement action,” though Twitch didn’t specify what that means. So if you wear sheer or partially see-through clothing, or use black bars to cover your private parts, you’re likely to get hit with some sort of discipline.

“We don’t permit streamers to be fully or partially nude, including exposing genitals or buttocks. Nor do we permit streamers to imply or suggest that they are fully or partially nude, including, but not limited to, covering breasts or genitals with objects or censor bars,” the company said in the blog post. “We do not permit the visible outline of genitals, even when covered. Broadcasting nude or partially nude minors is always prohibited, regardless of context. For those who present as women, we ask that you cover your nipples and do not expose underbust. Cleavage is unrestricted as long as these coverage requirements are met and it is clear that the streamer is wearing clothing. For all streamers, you must cover the area extending from your hips to the bottom of your pelvis and buttocks.”

[…]

At the beginning of December, some streamers, including Morgpie and LivStixs, began broadcasting in what appeared to be the complete nude. In actuality, these content creators were implying nudity by positioning their cameras at the right angle so as to show plenty of unobscured cleavage but keep nipples out of sight. “Artistic nudity” is what it was called and, as the meta took over the platform, Twitch conceded, allowing such nakedness to proliferate all over livestreams.

[…]

Company CEO Dan Clancy said on December 15 that “depictions of real or fictional nudity won’t be allowed on Twitch, regardless of the medium.” He also apologized for the confusion this whole situation has caused, saying that part of Twitch’s job is “to make adjustments that serve the community.” So be careful, streamers. If you show up nude on the platform, Twitch will come for you.

Source: Twitch Bans Implied Nakedness In Response To ‘Nudity Meta’

What is wrong with these people?! If you don’t want to see (almost) nudity, you can always just change the channel!

Novel helmet liner 30 times better at stopping concussions

[…]

Among sportspeople and military vets, traumatic brain injury (TBI) is one of the major causes of permanent disability and death. Injury statistics show that the majority of TBIs, of which concussion is a subtype, are associated with oblique impacts, which subject the brain to a combination of linear and rotational kinetic energy forces and cause shearing of the delicate brain tissue.

To improve their effectiveness, helmets worn by military personnel and sportspeople must employ a liner material that limits both kinds of energy transfer. This is where researchers from the University of Wisconsin-Madison come in. Determined to prevent – or lessen the effect of – TBIs caused by knocks to the body and head, they’ve developed a new lightweight foam material for use as a helmet liner.

[…]

For the current study, Thevamaran built upon his previous research into vertically aligned carbon nanotube (VACNT) foams – carefully arranged layers of carbon cylinders one atom thick – and their exceptional shock-absorbing capabilities. Current helmets attempt to reduce rotational motion by allowing a sliding motion between the wearer’s head and the helmet during impact. However, the researchers say this movement doesn’t dissipate energy in shear and can jam when severely compressed following a blow. Instead, their novel foam doesn’t rely on sliding layers.

Oblique impacts, associated with the majority of TBIs, subject the brain to a combination of linear and rotational shear forces (Image: Maheswaran et al.)

VACNT foam sidesteps this shortcoming via its unique deformation mechanism. Under compression, the VACNTs undergo collective sequentially progressive buckling, from increased compliance at low shear strain levels to a stiffening response at high strain levels. The formed compression buckles unfold completely, enabling the VACNT foam to accommodate large shear strains before returning to a near initial state when the load is removed.

The researchers found that at 25% precompression, the foam exhibited almost 30 times higher energy dissipation in shear – up to 50% shear strain – than polyurethane-based elastomeric foams of similar density.

[…]

The study was published in the journal Experimental Mechanics.

Source: University of Wisconsin-Madison


Source: Novel helmet liner 30 times better at stopping concussions

People discussing Assisted Dying (Euthanasia) in the UK – apparently it’s still illegal there

Dame Esther Rantzen says a free vote on assisted dying would be top of the agenda if she were PM for a day.

“I think it’s important that the law catches up with what the country wants,” the veteran broadcaster told Radio 4’s Today podcast.

Earlier this year, the 83-year-old announced she had been diagnosed with stage four lung cancer.

Dame Esther told the BBC she is currently undergoing a “miracle” treatment to combat the disease.

However, if her next scan shows the medication is not working “I might buzz off to Zurich”, where assisted dying is legal and she has joined the Dignitas clinic, she said.

She said this decision could be driven in part by her wish that her family’s “last memories of me” are not “painful because if you watch someone you love having a bad death, that memory obliterates all the happy times”.

Source: Dame Esther Rantzen: ‘If I were PM, we would vote on assisted dying’ – BBC News

What civilised country doesn’t allow euthanasia? It’s like a 1970s country where being gay is still illegal. Climb up out of your Brexit-inflicted stone age, Britain!

European Commission agrees to new rules that will protect gig workers rights – hopefully in ~2 years they will get the rights they need

Gig workers in the EU will soon get new benefits and protections, making it easier for them to receive employment status. Right now, over 500 digital labor platforms are actively operating in the EU, employing roughly 28 million platform workers. The new rules follow agreements made between the European Parliament and the EU Member States, after policies were first proposed by the European Commission in 2021.

The new rules highlight employment status as a key issue for gig workers, meaning an employed individual can reap the labor and social rights associated with an official worker title. This can include things like a legal minimum wage, the option to engage in collective bargaining, health protections at work, and options for paid leave and sick days. Through recognition of worker status from the EU, gig workers can also qualify for unemployment benefits.

Given that most gig workers are employed by digital apps, like Uber or Deliveroo, the new directive will require “human oversight of the automated systems” to make sure labor rights and proper working conditions are guaranteed. The workers also have the right to contest any automated decisions by digital employers — such as a termination.

The new rules will also require employers to inform and consult workers when there are “algorithmic decisions” that affect them. Employers will be required to report where their gig workers are fulfilling labor-related tasks to ensure the traceability of employees, especially when there are cross-border situations to consider in the EU.

Before the new gig worker protections can formally roll out, there needs to be a final approval of the agreement by the European Parliament and the Council. The stakeholders will have two years to implement the new protections into law. Similar protections for gig workers in the UK were introduced in 2021. Meanwhile, in the US, select cities have rolled out minimum wage rulings and benefits — despite Uber and Lyft’s pushback against such requirements.

Source: European Commission agrees to new rules that will protect gig workers rights

The EU does good work, but at a snail’s pace.

Mind-reading AI can translate brainwaves into written text

Using only a sensor-filled helmet combined with artificial intelligence, a team of scientists has announced they can turn a person’s thoughts into written words.

In the study, participants read passages of text while wearing a cap that recorded electrical brain activity through their scalp. These electroencephalogram (EEG) recordings were then converted into text using an AI model called DeWave.

Chin-Teng Lin at the University of Technology Sydney (UTS), Australia, says the technology is non-invasive, relatively inexpensive and easily transportable.

While the system is far from perfect, with an accuracy of approximately 40 per cent, Lin says more recent data currently being peer-reviewed shows an improved accuracy exceeding 60 per cent.

In the study presented at the NeurIPS conference in New Orleans, Louisiana, participants read the sentences aloud, even though the DeWave program doesn’t use spoken words. However, in the team’s latest research, participants read the sentences silently.

Last year, a team led by Jerry Tang at the University of Texas at Austin reported a similar accuracy in converting thoughts to text, but MRI scans were used to interpret brain activity. Using EEG is more practical, as subjects don’t have to lie still inside a scanner.

[…]

Source: Mind-reading AI can translate brainwaves into written text | New Scientist

Ultrasound Enables Remote 3-D Printing–Even in the Human Body

Mechanical engineers Shervin Foroughi and Mohsen Habibi were painstakingly maneuvering a tiny ultrasound wand over a pool of liquid when they first saw an icicle shape emerge and solidify.

[…]

Most commercial forms of 3-D printing involve extruding fluid materials—plastics, ceramics, metals or even biological compounds—through a nozzle and hardening them layer-by-layer to form computer-drafted structures. That hardening step is key, and it relies on energy in the form of light or heat.

[…]

Using ultrasound to trigger chemical reactions in room-temperature liquids isn’t new in itself. The field of sonochemistry and its applications, which matured in the 1980s at the University of Illinois Urbana-Champaign (UIUC), relies on a phenomenon called acoustic cavitation. This happens when ultrasonic vibrations create tiny bubbles, or cavities, within a fluid. When these bubbles collapse, the vapors inside them generate immense temperatures and pressures; this applies rapid heating at minuscule, localized points.

[…]

In their experiments, which were published in Nature Communications in 2022, the researchers filled a cylindrical, opaque-shelled chamber with a common polymer (polydimethylsiloxane, or PDMS) mixed with a curing agent. They submerged the chamber in a tank of water, which served as a medium for the sound waves to propagate into the chamber (similar to the way ultrasound waves from medical imaging devices travel through gel spread on a patient’s skin). Then, using a biomedical ultrasound transducer mounted to a computer-controlled motion manipulator, the scientists traced the ultrasound beam’s focal point along a calculated path 18 millimeters deep into the liquid polymer. Tiny bubbles started to appear in the liquid along the transducer’s path, and solidified material quickly followed. After fastidiously trying many combinations of ultrasound frequencies, liquid viscosity and other parameters, the team finally succeeded in using the approach to print maple-leaf shapes, seven-toothed gears and honeycomb structures within the liquid bath. The researchers then repeated these experiments using various polymers and ceramics, and they presented their results at the Canadian Acoustical Association’s annual conference this past October.

[…]

A crucial next step for sound-based printing would be to show how this process can function in real applications that meet the strict requirements of engineers and product designers, such as materials strength, surface finish and repeatability.

The research team will soon publish new work that discusses improvements in printing speed and, significantly, resolution. In the 2022 paper the team demonstrated the ability to print “pixels” that measure 100 microns on a side. In comparison, traditional 3-D printing can achieve pixels half that size.

[…]

Source: Ultrasound Enables Remote 3-D Printing–Even in the Human Body | Scientific American

Your Organs Might Be Aging at Different Rates

The number of birthdays you’ve had—better known as your chronological age—now appears to be less important in assessing your health than ever before. A new study shows that bodily organs get “older” at extraordinarily different rates, and each one’s biological age can be at odds with a person’s age on paper.

[…]

The team sampled the blood of more than 5,500 people, all with no active disease or clinically abnormal biomarkers, to look for proteins that originated from specific organs. The scientists were able to determine where those proteins came from by measuring their gene activity: when genes for a protein were expressed four times more in one organ, that designated its origin. Next the team measured the concentrations of thousands of proteins in a drop of blood and found that almost 900 of them—about 18 percent of the proteins measured—tended to be specific to a single organ. When those proteins varied from the expected concentration for a particular chronological age, that indicated accelerated aging in the corresponding organ.

“We could say with reasonable certainty that [a particular protein] likely comes from the brain and somehow ends up in the blood,” explains Tony Wyss-Coray, a professor of neurology at Stanford University and co-author of the new study. If that protein concentration changes in the blood, “it must also likely change in the brain—and [that] tells us something about how the brain ages,” Wyss-Coray says.

By comparing study participants’ organ-specific proteins, the researchers were able to estimate an age gap—the difference between an organ’s biological age and its chronological age. Depending on the organ involved, participants found to have at least one organ with accelerated aging had an increased disease and mortality risk over the next 15 years. For example, those whose heart was “older” than usual had more than twice the risk of heart failure compared with people with a typically aging heart. Aging in the heart was also a strong predictor of heart attack. Similarly, those with a quickly aging brain were more likely to experience cognitive decline. Accelerated aging in the brain and vascular system predicted the progression of Alzheimer’s disease just as strongly as plasma pTau-181, the current clinical blood biomarker for the condition. Extreme aging in the kidneys was a strong predictor of hypertension and diabetes.

[…]

Wyss-Coray anticipates this research could lead to a simple blood test that could guide prognostic work—in other words, a test that could help foretell future illness. “You could start to do interventions before that person develops disease,” he says, “and potentially reverse this accelerating aging or slow it down.”

[…]

The momentum of commercial epigenetic testing is a “gold rush,” Shiels says. “There is a degree of oversell on what [the tests] can do.”

A single organ doesn’t tell the whole story of aging because deterioration processes are interconnected and affect an entire organism. “We understand a lot about the aging process on sort of a micro level,” Shiels says. “But a lot of the factors that drive age-related organ dysfunction are environmental. So it’s lifestyle, pollution, what you eat, microbes in your gut.”

[…]

Source: Your Organs Might Be Aging at Different Rates | Scientific American

Brazilian city enacts an ordinance that was written by ChatGPT – might be the first law written by AI

City lawmakers in Brazil have enacted what appears to be the nation’s first legislation written entirely by artificial intelligence — even if they didn’t know it at the time.

The experimental ordinance was passed in October in the southern city of Porto Alegre and city councilman Ramiro Rosário revealed this week that it was written by a chatbot, sparking objections and raising questions about the role of artificial intelligence in public policy.

Rosário told The Associated Press that he asked OpenAI’s chatbot ChatGPT to craft a proposal to prevent the city from charging taxpayers to replace water consumption meters if they are stolen. He then presented it to his 35 peers on the council without making a single change or even letting them know about its unprecedented origin.

“If I had revealed it before, the proposal certainly wouldn’t even have been taken to a vote,” Rosário told the AP by phone on Thursday. The 36-member council approved it unanimously and the ordinance went into effect on Nov. 23.

“It would be unfair to the population to run the risk of the project not being approved simply because it was written by artificial intelligence,” he added.

[…]

“We want work that is ChatGPT generated to be watermarked,” he said, adding that the use of artificial intelligence to help draft new laws is inevitable. “I’m in favor of people using ChatGPT to write bills as long as it’s clear.”

There was no such transparency for Rosário’s proposal in Porto Alegre. Sossmeier said Rosário did not inform fellow council members that ChatGPT had written the proposal.

Keeping the proposal’s origin secret was intentional. Rosário told the AP his objective was not just to resolve a local issue, but also to spark a debate. He said he entered a 49-word prompt into ChatGPT and it returned the full draft proposal within seconds, including justifications.

[…]

And the council president, who initially decried the method, already appears to have been swayed.

“I changed my mind,” Sossmeier said. “I started to read more in depth and saw that, unfortunately or fortunately, this is going to be a trend.”

Source: Brazilian city enacts an ordinance that was written by ChatGPT | AP News

Your Tastebuds Help Tell You When to Stop Eating, New Research Suggests

Our mouths might help keep our hunger in check. A recent study found evidence in mice that our brains rely on two separate pathways to regulate our sense of fullness and satiety—one originating from the gut and the other from cells in the mouth that let us perceive taste. The findings could help scientists better understand and develop anti-obesity drugs, the study authors say.

The experiment was conducted by researchers at the University of California San Francisco. They were hoping to definitively answer one of the most important and basic questions about our physiology: What actually makes us want to stop eating?

It’s long been known that the brainstem—the bottom part of the brain that controls many subconscious body functions—also helps govern fullness. The current theory is that neurons in the brainstem respond to signals from the stomach and gut as we’re eating a meal, which then trigger that feeling of having had enough. But scientists have only been able to indirectly study this process until now, according to lead author Zachary Knight, a UCSF professor of physiology in the Kavli Institute for Fundamental Neuroscience. His team was able to directly image and record the fullness-related neurons in the brainstem of alert mice right as they were chowing down.

“Our study is the first to observe these neurons while an animal eats,” Knight told Gizmodo in an email. “We found surprisingly that many of these cells respond to different signals and control feeding in different ways than was widely assumed.”

The team focused on two types of neurons in the brainstem thought to regulate fullness: prolactin-releasing hormone (PRLH) neurons and GCG neurons.

When they fed mice through the stomach alone, they found that PRLH neurons were activated by the gut, as expected by prior assumptions. But when the mice ate normally, these gut signals disappeared; instead, the PRLH neurons were almost instantly activated by signals from the mouth, largely from the parts responsible for taste perception. Minutes later, the GCG neurons were activated by gut signals.

The team’s findings, published Wednesday in Nature, indicate that there are two parallel tracks of satiety in the brainstem, ones that operate at different speeds with slightly different purposes.

“We found that the first pathway—which controls how fast you eat and involves PRLH neurons—is unexpectedly activated by the taste of food,” Knight said. “This was surprising, because we all know that tasty food causes us to eat more. But our findings reveal that food tastes also function to limit the pace of ingestion, through a brainstem pathway that likely functions beneath the level of our conscious awareness.”

The second pathway, governed by the gut and GCG neurons, seems to control how much we ultimately eat, Knight added.

Mice are not humans, of course. So more research will be needed to confirm whether we have a similar system.

[…]

Source: Your Tastebuds Help Tell You When to Stop Eating, New Research Suggests

Aging (for men) – what nobody told you: pee slippers

“The 100-year-old man set off in his pee-slippers (so called because men of an advanced age rarely pee farther than their shoes),”

― Jonas Jonasson, The 100-Year-Old Man Who Climbed Out the Window and Disappeared

Guys, as you get older your bladder power goes down. This has some consequences – you don’t pee very far and you don’t empty out fully after pissing, which leads to drippage in your underwear. You wake up (once, twice, three times) per night and go to the bathroom now. If you search up this kind of stuff, chances are you will have found overly serious conditions such as “Urinary Retention”, “Urinary Incontinence”, “Overflow Incontinence”, “Bladder Outlet Obstruction”, “Benign prostatic hyperplasia (BPH)”, “blood and/or cloudy urine”, “Nocturia” and all kinds of other nasties. This is not about that. This is about some of the better tips I have found to handle this dripping life we have now found ourselves in.

TL;DR

You get old and your muscles get weaker, you can hold less and your piss tube gets blocked. You have to manage your pissing, so drink less before you travel or need to be somewhere, especially caffeinated drinks. Double void, lean forward and whistle to empty your bladder. Do pelvic floor exercises (kegels) for more control. After you piss, quickly milk your piss tube (the urethra) behind your balls to empty it out. Put your legs up before sleeping and try to go to bed at the same time every night.

So what exactly happens to you as you get older?

As you age, the whole system around your piss (the kidneys, bladder, urethra [=piss tube] and prostate) changes naturally. The kidneys become lighter and can’t filter as much blood. The arteries supplying blood to the kidneys narrow. For women the piss tube (their urethra) shortens and its lining becomes thinner, which increases the risk of leaking; for men the urethra itself doesn’t change much, but the prostate gland can grow and block it. All your life, your bladder muscles contract without you actually needing to pee, but these contractions are blocked by your spinal cord and brain controls. As you get older your system stops blocking these contractions, so more urine is left in the bladder after you have taken a piss and you need to go more often. Not only that, but the muscles themselves weaken. The bladder wall itself becomes less elastic and so less able to hold much pee.

Further reading: Effects of Aging on the Urinary Tract – MSD Manual 2022 / Aging changes in the kidneys and bladder – Medline Plus (National Library of Medicine) 2022 / The Aging Bladder – National Library of Medicine, National Center for Biotechnology Information (2004)

Some actually useful tips for people who are just aging and not seriously ill

Medication use: Alter use of medications that could worsen urinary symptoms.

  • Talk to your doctor or pharmacist about prescription or over-the-counter medications that may be contributing to your BPH symptoms. Antihistamines and decongestants can cause problems for some.
  • If you use medications that could make you urinate more, don’t take them right before driving, traveling, attending an event, or going to bed.
  • Don’t rely on ineffective dietary supplements. Saw palmetto and other herbal supplements have failed rigorous scientific testing so far.

Fluid restriction: Change how much fluid you drink — and when — to prevent bothersome bathroom visits.

  • Don’t drink liquids before driving, traveling, or attending events where finding a bathroom quickly could be difficult.
  • Avoid drinking caffeinated or alcoholic beverages after dinner or within two hours of your bedtime.

Bladder habits: Change the timing and manner in which you empty your bladder to reduce symptoms or make them less disruptive.

  • Don’t hold it in; empty your bladder when you first get the urge.
  • When you are out in public, go to the bathroom and try to urinate when you get the chance, even if you don’t feel a need right then.
  • Take your time when urinating so you empty your bladder as much as possible.
  • Double void: After each time you urinate, try again right away.
  • On long airplane flights, avoid drinking alcohol, and try to urinate every 60 to 90 minutes.

Try these techniques to relieve common urinary symptoms without medication

  • Timed voids. Urinate at least every three to four hours. Never hold the urine.
  • Double void. Before leaving the restroom, try to empty your bladder a second time. Focus on relaxing the muscles of the pelvic floor. You may try running your hands under warm water before your second void to trigger a relaxation response.
  • Drink plenty of fluids. Fluids keep the urinary tract hydrated and clean.
  • Have a bowel movement every day. The rectum is just behind the bladder. If it is full, it can prevent the bladder from functioning properly. Increase your fruit, fiber, water and walking until you have soft bowel movements and don’t have to strain. You may add over-the-counter medications like senna (Senokot, SennaGen), Colace (docusate) or Dulcolax (bisacodyl).
  • Comfort and privacy are necessary to empty completely. Give yourself time to go.
  • Leaning forward (and rocking) may promote urination. After you have finished passing urine, squeeze the pelvic floor to try to completely empty.
  • The sound of water can prompt the bladder muscle to contract, but care should be taken not to promote bladder muscle instability with overuse of this technique.
  • Tapping over the bladder may assist in triggering a contraction in some people.
  • Stroking or tickling the lower back may stimulate urination and has been reported to be helpful in some patients.
  • Whistling provides a sustained outward breath with a gentle increase in pressure in the abdomen that may help with emptying your bladder.
  • General relaxation techniques can help people who are tense and anxious about their condition.
Techniques for Complete Bladder Emptying – Urology Group Virginia

Pelvic floor exercises

The pelvic floor consists of layers of muscles and ligaments that stretch like a hammock from the pubic bone at the front to the tip of the backbone, and that help to support your bladder and bowel. Pelvic floor exercises can be done in different positions. In each one, tighten your pelvic floor muscles as if you were trying to avoid breaking wind, and hold the contraction as strongly as you can without holding your breath or tensing your buttocks. Perform three contractions (as strong as possible) in the morning and three in the evening, holding each for up to 10 seconds.

• In a standing position, stand with your feet apart. If you look in a mirror you should see the base of your penis move nearer to your abdomen and your testicles rise.
• In a sitting position, sit on a chair with your knees apart.
• In a lying position, lie on your back with your knees bent and your legs apart.
• While walking, tighten your pelvic floor muscles as you walk.
• After urinating, once you have emptied your bladder, tighten your pelvic floor muscles as strongly as you can to avoid an after-dribble.

Post micturition dribble exercise (dripping, drippage, dribbling after peeing)

• After passing urine, wait for a few seconds to allow the bladder to empty.
• Place your fingers behind the scrotum and apply gentle pressure to straighten out the urethra.
• Continue this whilst gently lifting and stroking to encourage the trapped urine to flow out.
• Before leaving the toilet, repeat the technique twice to ensure that the urethra is completely empty.

This technique can easily be used at home. In public toilets it can be done discreetly, with a hand inside a trouser pocket. It only takes a few seconds and will avoid the problem of stained trousers. Pelvic floor exercises for men can also improve this problem, as they improve the tone of your muscles.

Male pelvic floor exercises and post micturition dribble – NHS Western Isles 2022 (PDF)

Many men dribble urine shortly after they have finished using the toilet and the bladder feels empty. Even waiting a moment and shaking the penis before zipping up won’t stop it. The medical term for this is post-micturition dribbling. It’s common in older men because the muscles surrounding the urethra — the long tube in the penis that allows urine to pass out of the body — don’t squeeze as hard as they once did. This leaves a small pool of urine at a dip in the urethra behind the base of the penis. In less than a minute after finishing, this extra urine dribbles out.

Here’s a simple technique that should help. Right after your urine stream stops, “milk out” the last few drops of urine. Using the fingertips of one hand, begin about an inch behind your scrotum. Gently press upward. Keep applying this pressure as you move your fingers toward the base of the penis under the scrotum. Repeat once or twice. This should move the pooled urine into the penis. You can then shake out the last few drops. With practice, you should be able to do this quickly.

What can I do about urinary dribbling? – Men’s Health 2022

Kegel Exercises

Kegel exercises, also known as pelvic floor muscle exercises, are the easiest way for you to control urinary incontinence and stress incontinence, as they can be easily added to your daily routine.

To perform a Kegel exercise, you just need to squeeze your pelvic floor muscles. These are the same muscles you would use to stop the flow of urine.

Simply squeeze these muscles for 3 seconds and then relax. The National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) suggests building up to 10-15 repetitions, 3 times a day. You can do these pelvic floor exercises while sitting or lying down.
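If you're the type who needs a timer for everything, the NIDDK routine above is easy to pace with a throwaway script. A minimal sketch (the function name and defaults are made up, and this is obviously a toy, not medical software):

```python
import time

def kegel_set(reps=10, squeeze_s=3, rest_s=3, sleep=time.sleep):
    """Pace one set of Kegels: squeeze for squeeze_s seconds, then
    relax for rest_s seconds, repeated reps times.
    Returns the number of reps completed."""
    for rep in range(1, reps + 1):
        print(f"Rep {rep}/{reps}: squeeze and hold...")
        sleep(squeeze_s)
        print("  ...and relax.")
        sleep(rest_s)
    return reps

# A short demo set; the NIDDK suggestion would be reps=10 to 15, three times a day
kegel_set(reps=2, squeeze_s=1, rest_s=1)
```

The `sleep` parameter is just there so you can swap the timing out; run it as-is and count along.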

Bladder Training

This is an effective way to overcome overactive bladder symptoms and gain more bladder control. The exercise trains your bladder to hold more urine before needing to empty it.

First, you need to determine your baseline. Make a diary of how often you need to go to the bathroom throughout the day. Then try to go to the bathroom less often, holding in the urine longer between visits. It may feel uncomfortable, but doing this will help you gain more bladder control.
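For the baseline diary, a few lines of Python will turn a day's list of bathroom visits into the numbers worth tracking. A hypothetical sketch (the times, field names and function names are made up for illustration, and it assumes all visits fall on the same day):

```python
from datetime import datetime

def gaps_minutes(times):
    """Minutes between consecutive bathroom visits, given 'HH:MM' strings.
    Assumes all visits are on the same day."""
    parsed = [datetime.strptime(t, "%H:%M") for t in times]
    return [int((b - a).total_seconds() // 60) for a, b in zip(parsed, parsed[1:])]

def summarize(times):
    """Baseline numbers to compare week over week."""
    gaps = gaps_minutes(times)
    return {
        "visits": len(times),
        "shortest_gap_min": min(gaps),
        "longest_gap_min": max(gaps),
        "average_gap_min": round(sum(gaps) / len(gaps)),
    }

# One (made-up) day's diary
diary = ["07:10", "09:00", "10:15", "12:40", "14:05", "16:50", "19:30", "22:00"]
print(summarize(diary))
```

The goal of the training is then easy to see in the numbers: watch the shortest gap creep up toward the three-to-four-hour mark over a few weeks.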

Bladder Exercises — How to Strengthen Bladder Muscles – Urology of Greater Atlanta

How to stop pissing in the middle of the night

  • Limit liquids before bedtime: Avoid drinking water or other beverages at night to reduce the need to wake up to urinate.
  • Reduce caffeine and alcohol intake: Caffeine can trigger the bladder to become overactive and produce too much urine. Reduce your intake of caffeine and alcoholic beverages later in the afternoon and evening.
  • Talk to your doctor about when to take medications: Some medications, such as diuretics, can increase nighttime urination. Ask your doctor about the ideal time to take medications so they don’t interfere with your sleep.
  • Strengthen your pelvic floor: Doctors recommend pelvic floor muscle exercises to help strengthen key muscles and control your urinary symptoms. 
  • Elevate or compress your legs: Some research has shown that you can reduce fluid buildup that leads to urination by elevating your legs or using compression socks before bedtime.
  • Practice good sleep hygiene: Healthy sleep hygiene can help you get better rest. Doctors recommend relaxing before bed, going to bed at the same time every night, and making sure your sleep environment is quiet, dark, and comfortable.
Frequent Urination at Night (Nocturia) – Sleep doctor 2023

Ok fellas, so hopefully we will stop dribbling into our pants a bit more. If you have any tips to improve on this guide then I look forward to hearing from you!

Next we will be looking at sleeping issues. This is a subject that seems to have some kind of taboo on it, but once you raise it, you realise that loads of people suffer from them.

Researchers printed a robotic hand with bones, ligaments and tendons for the first time

Researchers at the Zurich-based public university ETH, along with a US-based startup called Inkbit, have done the impossible. They’ve printed a robot hand complete with bones, ligaments and tendons for the very first time, representing a major leap forward in 3D printing technology. It’s worth noting that the various parts of the hand were printed simultaneously, and not cobbled together after the fact, as described in a study published in Nature.

Each of the robotic hand’s various parts were made from different polymers of varying softness and rigidity, using a new laser-scanning technique that lets 3D printers create “special plastics with elastic qualities” all in one go. This obviously opens up new possibilities in the fast-moving field of prosthetics, but also in any field that requires the production of soft robotic structures.

Basically, the researchers at Inkbit developed a method to 3D print slow-curing plastics, whereas the technology was previously reserved for fast-curing plastics. This hybrid printing method presents all kinds of advantages when compared to standard fast-cure projects, such as increased durability and enhanced elastic properties. The tech also allows us to mimic nature more accurately, as seen in the aforementioned robotic hand.

“Robots made of soft materials, such as the hand we developed, have advantages over conventional robots made of metal. Because they’re soft, there is less risk of injury when they work with humans, and they are better suited to handling fragile goods,” ETH Zurich robotics professor Robert Katzschmann writes in the study.

[Image: A robot dog or a pulley or something. Credit: ETH Zurich/Thomas Buchner]

This advancement still prints layer-by-layer, but an integrated scanner constantly checks the surface for irregularities before telling the system to move onto the next material type. Additionally, the extruder and scraper have been updated to allow for the use of slow-curing polymers. The stiffness can be fine-tuned for creating unique objects that suit various industries. Making human-like appendages is one use case scenario, but so is manufacturing objects that soak up noise and vibrations.

MIT-affiliated startup Inkbit helped develop this technology and has already begun thinking about how to make money off of it. The company will soon start to sell these newly-made printers to manufacturers but will also sell complex 3D-printed objects that make use of the technology to smaller entities.

Source: Researchers printed a robotic hand with bones, ligaments and tendons for the first time

‘Super Melanin’ Speeds Healing, Stops Sunburn, and More

A team of scientists at Northwestern University has developed a synthetic version of melanin that could have a million and one uses. In new research, they showed that their melanin can prevent blistering and accelerate the healing process in tissue samples of freshly injured human skin. The team now plans to further develop their “super melanin” as both a medical treatment for certain skin injuries and as a potential sunscreen and anti-aging skincare product.

[…] Most people might recognize melanin as the main driver of our skin color, or as the reason why some people will tan when exposed to the sun’s harmful UV rays. But it’s a substance with many different functions across the animal kingdom. It’s the primary ingredient in the ink produced by squids; it’s used by certain microbes to evade a host’s immune system; and it helps create the iridescence of some butterflies. A version of melanin produced by our brain cells might even protect us from neurodegenerative conditions like Parkinson’s.

[…]

Their latest work was published Thursday in the Nature journal npj Regenerative Medicine. In the study, they tested the melanin on both mice and donated human skin tissue samples that had been exposed to potentially harmful things (the skin samples were exposed to toxic chemicals, while the mice were exposed to chemicals and UV radiation). In both scenarios, the melanin reduced or even entirely prevented the damage to the top and underlying layers of skin that would have been expected. It seemed to do this mainly by vacuuming up the damaging free radicals generated in the skin by these exposures, which in turn reduced inflammation and generally sped up the healing process.

The team’s creation very closely resembles natural melanin, to the extent that it seems to be just as biodegradable and nontoxic to the skin as the latter (in experiments so far, it doesn’t appear to be absorbed into the body when applied topically, further reducing any potential safety risks). But the ability to apply as much of their melanin as needed means that it could help repair skin damage that might otherwise overwhelm our body’s natural supply. And their version has been tweaked to be more effective at its job than usual.

[…]

It could have military applications—one line of research is testing whether the melanin can be used as a protective dye in clothing that would absorb nerve gas and other environmental toxins.

[…]

On the clinical side, they’re planning to develop the synthetic melanin as a treatment for radiation burns and other skin injuries. And on the cosmetic side, they’d like to develop it as an ingredient for sunscreens and anti-aging skincare products.

[…]

“…all of those important mechanisms we’re seeing [from the clinical research] are the same things that you look for in an ideal profile of an anti-aging cream, if you will, or a cream that tries to repair the skin.”

[…]

Source: ‘Super Melanin’ Speeds Healing, Stops Sunburn, and More

AI Risks – doomsayers, warriors, reformers

There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in AI technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.

The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts, and the public declarations issued about AI are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns.

The result is a cacophony of coded language, contradictory views, and provocative policy demands that are undermining our ability to grapple with a technology destined to drive the future of politics, our economy, and even our daily lives.

These factions are in dialogue not only with the public but also with one another. Sometimes, they trade letters, opinion essays, or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view AI. But if lawmakers and the public fail to recognize the subtext of their arguments, they risk missing the real consequences of our possible regulatory and cultural paths forward.

To understand the fight and the impact it may have on our shared future, look past the immediate claims and actions of the players to the greater implications of their points of view. When you do, you’ll realize this isn’t really a debate only about AI. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.

Beneath this roiling discord is a true fight over the future of society. Should we focus on avoiding the dystopia of mass unemployment, a world where China is the dominant superpower, or a society where the worst prejudices of humanity are embodied in opaque algorithms that control our lives? Should we listen to wealthy futurists who discount the importance of climate change because they’re already thinking ahead to colonies on Mars? It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of AI to stay true to the humanity of our values.

One way to decode the motives behind the various declarations is through their language. Because language itself is part of their battleground, the different AI camps tend not to use the same words to describe their positions. One faction describes the dangers posed by AI through the framework of safety, another through ethics or integrity, yet another through security, and others through economics. By decoding who is speaking and how AI is being described, we can explore where these groups differ and what drives their views.

The Doomsayers

The loudest perspective is a frightening, dystopian vision in which AI poses an existential risk to humankind, capable of wiping out all life on Earth. AI, in this vision, emerges as a godlike, superintelligent, ungovernable entity capable of controlling everything. AI could destroy humanity or pose a risk on par with nukes. If we’re not careful, it could kill everyone or enslave humanity. It’s likened to monsters like the Lovecraftian shoggoths, artificial servants that rebelled against their creators, or paper clip maximizers that consume all of Earth’s resources in a single-minded pursuit of their programmed goal. It sounds like science fiction, but these people are serious, and they mean the words they use.

These are the AI safety people, and their ranks include the “Godfathers of AI,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind. Having steamrollered the public conversation by creating large language models like ChatGPT and other AI tools capable of increasingly impressive feats, they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.

This doomsaying is boosted by a class of tech elite that has enormous power to shape the conversation. And some in this group are animated by the radical effective altruism movement and the associated cause of longtermism, which tend to focus on the most extreme catastrophic risks and emphasize the far-future consequences of our actions. These philosophies are hot among the cryptocurrency crowd, like the disgraced former billionaire Sam Bankman-Fried, who at one time possessed sudden wealth in search of a cause.

Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic longtermist would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like AI enslavement.

Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future. In the name of longtermism, Elon Musk reportedly believes that our society needs to encourage reproduction among those with the greatest culture and intelligence (namely, his ultrarich buddies). And he wants to go further, such as limiting the right to vote to parents and even populating Mars. It’s widely believed that Jaan Tallinn, the wealthy longtermist who co-founded the most prominent centers for the study of AI safety, has made dismissive noises about climate change because he thinks that it pales in comparison with far-future unknown unknowns like risks from AI. The technology historian David C. Brock calls these fears “wishful worries”—that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”

More practically, many of the researchers in this group are proceeding full steam ahead in developing AI, demonstrating how unrealistic it is to simply hit pause on technological development. But the roboticist Rodney Brooks has pointed out that we will see the existential risks coming—the dangers will not be sudden and we will have time to change course. While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of AI and, most important, not allow them to strategically distract from more immediate concerns. Let’s not let apocalyptic prognostications overwhelm us and smother the momentum we need to develop critical guardrails.

The Reformers

While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower. Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.

The alternative to the end-of-the-world, existential risk narrative is a distressingly familiar vision of dystopia: a society in which humanity’s worst instincts are encoded into and enforced by machines. The doomsayers think AI enslavement looks like the Matrix; the reformers point to modern-day contractors doing traumatic work at low pay for OpenAI in Kenya.

Propagators of these AI ethics concerns—like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury, and Cathy O’Neil—have been raising the alarm on inequities coded into AI for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women, and people who identify as LGBTQ. They are often motivated by insight into what it feels like to be on the wrong end of algorithmic oppression and by a connection to the communities most vulnerable to the misuse of new technology. Many in this group take an explicitly social perspective: When Joy Buolamwini founded an organization to fight for equitable AI, she called it the Algorithmic Justice League. Ruha Benjamin called her organization the Ida B. Wells Just Data Lab.

Others frame efforts to reform AI in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside—or even above—their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the AI revolution have, at times, been eliminating safeguards. A signal moment came when Timnit Gebru, a co-leader of Google’s AI ethics team, was dismissed for pointing out the risks of developing ever-larger AI language models.

While doomsayers and reformers share the concern that AI must align with human interests, reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by AI misinformation, surveillance, and inequity. Integrity experts call for the development of responsible AI, for civic education to ensure AI literacy and for keeping humans front and center in AI systems.

This group’s concerns are well documented and urgent—and far older than modern AI technologies. Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that AI might kill us in the future should still demand that it not profile and exploit us in the present.

The Warriors

Other groups of prognosticators cast the rise of AI through the language of competitiveness and national security. One version has a post-9/11 ring to it—a world where terrorists, criminals, and psychopaths have unfettered access to technologies of mass destruction. Another version is a Cold War narrative of the United States losing an AI arms race with China and its surveillance-rich society.

Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.

OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant AI companies, are pushing for AI regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading AI companies while restricting competition from start-ups. In the lobbying battles over Europe’s trailblazing AI regulatory framework, US megacompanies pleaded to exempt their general-purpose AI from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”

Any technology critical to national defense usually has an easier time avoiding oversight, regulation, and limitations on profit. Any readiness gap in our military demands urgent budget increases and funds distributed to the military branches and their contractors, because we may soon be called upon to fight. Tech moguls like Google’s former chief executive Eric Schmidt, who has the ear of many lawmakers, signal to American policymakers about the Chinese threat even as they invest in US national security concerns.

The warriors’ narrative seems to overlook the fact that science and engineering are different from what they were during the mid-twentieth century. AI research is fundamentally international; no one country will win a monopoly. And while national security is important to consider, we must also be mindful of the self-interest of those positioned to benefit financially.
As the science-fiction author Ted Chiang has said, fears about the existential risks of AI are really fears about the threat of uncontrolled capitalism, and dystopias like the paper clip maximizer are just caricatures of every start-up’s business plan. Cosma Shalizi and Henry Farrell further argue that “we’ve lived among shoggoths for centuries, tending to them as though they were our masters” as monopolistic platforms devour and exploit the totality of humanity’s labor and ingenuity for their own interests. This dread applies as much to our future with AI as it does to our past and present with corporations.

Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with AI, China, and the fights picked among robber barons.

By analogy to the healthcare sector, we need an AI public option to truly keep AI companies in check. A publicly directed AI development project would serve as a counterbalance to for-profit corporate AI and help ensure an even playing field for access to the twenty-first century’s key technology, while offering a platform for the ethical development and use of AI.

Also, we should embrace the humanity behind AI. We can hold founders and corporations accountable by mandating greater AI transparency in the development stage, in addition to applying legal standards for actions associated with AI. Remarkably, this is something that both the left and the right can agree on.

Ultimately, we need to make sure the network of laws and regulations that govern our collective behavior is knit more strongly, with fewer gaps and greater ability to hold the powerful accountable, particularly in those areas most sensitive to our democracy and environment. As those with power and privilege seem poised to harness AI to accumulate much more or pursue extreme ideologies, let’s think about how we can constrain their influence in the public square rather than cede our attention to their most bombastic nightmare visions for the future.

This essay was written with Nathan Sanders, and previously appeared in the New York Times.

Source: AI Risks – Schneier on Security

AI and smart mouthguards: the new frontline in fight against brain injuries

There was a hidden spectator of the NFL match between the Baltimore Ravens and Tennessee Titans in London on Sunday: artificial intelligence. As crazy as it may sound, computers have now been taught to identify on-field head impacts in the NFL automatically, using multiple video angles and machine learning. So a process that would take 12 hours – for each game – is now done in minutes. The result? After every weekend, teams are sent a breakdown of which players got hit, and how often.
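The weekly breakdown described above is, at its core, an aggregation step over the impact detections the vision models emit. A minimal sketch of that reporting stage, using a hypothetical record format (the NFL’s actual schema is not public):

```python
from collections import Counter

# Hypothetical detection records: one entry per head impact the
# video-analysis models identified across the multiple camera angles.
detections = [
    {"game": "BAL@TEN", "player": "Player A"},
    {"game": "BAL@TEN", "player": "Player A"},
    {"game": "BAL@TEN", "player": "Player B"},
]

def weekly_impact_report(detections):
    """Aggregate per-player head-impact counts, most-impacted first."""
    counts = Counter(d["player"] for d in detections)
    return sorted(counts.items(), key=lambda kv: -kv[1])

print(weekly_impact_report(detections))
# [('Player A', 2), ('Player B', 1)]
```

The hard part, of course, is the detection model itself; the point here is only that the "breakdown of which players got hit, and how often" is a simple roll-up once detections exist.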

This tech wizardry, naturally, has a deeper purpose. Over breakfast the NFL’s chief medical officer, Allen Sills, explained how it was helping to reduce head impacts, and drive equipment innovation.

Players who experience high numbers can, for instance, be taught better techniques. Meanwhile, nine NFL quarterbacks and 17 offensive linemen are wearing position-specific helmets, which have significantly more padding in the areas where they experience more impacts.

What may be next? Getting accurate sensors in helmets, so the force of each tackle can also be estimated, is one area of interest. As is using biomarkers, such as saliva and blood, to better understand when to bring injured players back to action.

If that’s not impressive enough, this weekend rugby union became the first sport to adopt smart mouthguard technology, which flags big “hits” in real time. From January, whenever an elite player experiences an impact in a tackle or ruck that exceeds a certain threshold, they will automatically be taken off for a head injury assessment by a doctor.

No wonder Dr Eanna Falvey, World Rugby’s chief medical officer, calls it a “gamechanger” in potentially identifying many of the 18% of concussions that now come to light only after a match.

[…]

As things stand, World Rugby is adding the G-force and rotational acceleration of a hit to determine when to automatically take a player off for an HIA. Over the next couple of years, it wants to improve its ability to identify the impacts with clinical meaning – which will also mean looking at other factors, such as the duration and direction of the impact, as well.
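A toy sketch of the real-time trigger described here, checking a hit’s linear G-force and rotational acceleration against cut-offs. The threshold values are illustrative placeholders, since World Rugby’s actual criteria are not given in the article:

```python
def needs_hia(linear_g, rotational_rad_s2,
              g_threshold=70.0, rot_threshold=4000.0):
    """Flag a head injury assessment (HIA) when either measure exceeds
    its threshold. The cut-offs here are placeholders, not World
    Rugby's real values."""
    return linear_g >= g_threshold or rotational_rad_s2 >= rot_threshold

print(needs_hia(35.0, 1500.0))  # False: below both thresholds, play on
print(needs_hia(85.0, 1200.0))  # True: G-force exceeded, automatic HIA
```

The refinements World Rugby wants — impact duration and direction — would simply add further terms to this decision.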

[…]

Then there is the ability to use the smart mouthguard to track load over time. “It’s one thing to assist to identify concussions,” he says. “It’s another entirely to say it’s going to allow coaches and players to track exactly how many significant head impacts they have in a career – especially with all the focus on long-term health risks. If they can manage that load, particularly in training, that has performance and welfare benefits.”

[…]

Source: AI and smart mouthguards: the new frontline in fight against brain injuries | Sport | The Guardian

Code.org Presses Washington To Make Computer Science a High School Graduation Requirement – this should be everywhere globally

In July, Seattle-based and tech-backed nonprofit Code.org announced its 10th policy recommendation for all states: “to require all students to take computer science (CS) to earn a high school diploma.” In August, Washington State Senator Lisa Wellman phoned in her plans to introduce a bill making computer science a Washington high school graduation requirement to the state’s Board of Education, indicating that the ChatGPT-sparked AI craze and Code.org had helped convince her of the need. Wellman, a former teacher who worked as a programmer/systems analyst in the ’80s before becoming an Apple VP (Publishing) in the ’90s, also indicated that exposure to CS in fifth grade could be sufficient to satisfy a high school CS requirement. In 2019, Wellman sponsored Microsoft-supported SB 5088, which required all Washington state public high schools to offer a CS class. She also sponsored SB 5299 in 2021, which allows high school students to count a computer science elective toward graduation requirements in place of a third year of math or science (which may be required for college admission).

And in October, Code.org CEO Hadi Partovi appeared before the Washington State Board of Education, driving home points Senator Wellman made in August with a deck containing slides calling for Washington to “require that all students take computer science to earn a high school diploma” and to “require computer science within all teacher certifications.” Like Wellman, Partovi suggested the CS high school requirement might be satisfied by middle school work (he alternatively suggested one year of foreign language could be dropped to accommodate a HS CS course). Partovi noted that Washington contained some of the biggest promoters of K-12 CS in Microsoft Philanthropies’ TEALS (TEALS founder Kevin Wang is a member of the Washington State Board of Education) and Code.org, as well as some of the biggest funders of K-12 CS in Amazon and Microsoft, both of which are $3,000,000+ Platinum Supporters of Code.org and have top execs on Code.org’s Board of Directors.

Source: Code.org Presses Washington To Make Computer Science a High School Graduation Requirement – Slashdot

Most kids have no clue how a computer works, let alone how to program one. It’s not difficult, but it is an essential skill in today’s society.

The AI Act needs a practical definition of ‘subliminal techniques’ (because those used in Advertising aren’t enough)

While the draft EU AI Act prohibits harmful ‘subliminal techniques’, it doesn’t define the term – we suggest a broader definition that captures problematic manipulation cases without overburdening regulators or companies, write Juan Pablo Bermúdez, Rune Nyrup, Sebastian Deterding and Rafael A. Calvo.

Juan Pablo Bermúdez is a Research Associate at Imperial College London; Rune Nyrup is an Associate Professor at Aarhus University; Sebastian Deterding is a Chair in Design Engineering at Imperial College London; Rafael A. Calvo is a Chair in Engineering Design at Imperial College London.

If you ever worried that organisations use AI systems to manipulate you, you are not alone. Many fear that social media feeds, search, recommendation systems, or chatbots can unconsciously affect our emotions, beliefs, or behaviours.

The EU’s draft AI Act articulates this concern, mentioning “subliminal techniques” that impair autonomous choice “in ways that people are not consciously aware of, or even if aware not able to control or resist” (Recital 16, EU Council version). Article 5 prohibits systems using subliminal techniques that modify people’s decisions or actions in ways likely to cause significant harm.

This prohibition could helpfully safeguard users. But as written, it also runs the risk of being inoperable. It all depends on how we define ‘subliminal techniques’ – which the draft Act does not do yet.

Why narrow definitions are bound to fail

The term ‘subliminal’ traditionally refers to sensory stimuli that are weak enough to escape conscious perception but strong enough to influence behaviour; for example, showing an image for less than 50 milliseconds.

Defining ‘subliminal techniques’ in this narrow sense presents problems. First, experts agree that subliminal stimuli have very short-lived effects at best, and only move people to do things they are already motivated to do.

Further, this would not cover most problematic cases motivating the prohibition: when an online ad influences us, we are aware of the sensory stimulus (the visible ad).

Furthermore, such legal prohibitions have been ineffective because subliminal stimuli are, by definition, not plainly visible. As Neuwirth’s historical analysis shows, Europe prohibited subliminal advertising more than three decades ago, but regulators have hardly ever pursued cases.

Thus, narrowly defining ‘subliminal techniques’ as subliminal stimulus presentation is likely to miss most manipulation cases of concern and end up as a dead letter.

A broader definition can align manipulation and practical concerns

We agree with the AI Act’s starting point: AI-driven influence is often problematic due to lack of awareness.

However, unawareness of sensory stimuli is not the key issue. Rather, as we argue in a recent paper, manipulative techniques are problematic if they hide any of the following:

  • The influence attempt. Many internet users are not aware that websites adapt based on personal information to optimize “customer engagement”, sales, or other business concerns. Web content is often tailored to nudge us towards certain behaviours, while we remain unaware that such tailoring occurs.
  • The influence methods. Even when we know that some online content seeks to influence, we frequently don’t know why we are presented with a particular image or message – was it chosen through psychographic profiling, nudges, something else? Thus, we can remain unaware of how we are influenced.
  • The influence’s effects. Recommender systems are meant to learn our preferences and suggest content that aligns with them, but they can end up changing our preferences. Even if we know how we are influenced, we may still be unaware of how the influence changed our decisions and behaviours.

To see why this matters, ask yourself: as a user of digital services, would you rather not be informed about these influence techniques?

Or would you prefer knowing when you are targeted for influence; how influence tricks push your psychological buttons (that ‘Only 1 left!’ sign targets your aversion to loss); and what consequences influence is likely to have (the sign makes you more likely to purchase impulsively)?

We thus propose the following definition:

Subliminal techniques aim at influencing a person’s behaviour in ways in which the person is likely to remain unaware of (1) the influence attempt, (2) how the influence works, or (3) the influence attempt’s effects on decision-making or value- and belief-formation processes.

This definition is broad enough to capture most cases of problematic AI-driven influence; but not so broad as to become meaningless, nor excessively hard to put into practice. Our definition specifically targets techniques: procedures that predictably produce certain outcomes.

Such techniques are already being classified, for example, in lists of nudges and dark patterns, so companies can check those lists and ensure that they either don’t use them or disclose their usage.
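In practice, the compliance check the authors envision could be as mechanical as screening a product’s influence techniques against such a catalogue. A toy sketch, with invented catalogue entries and technique names:

```python
# Invented catalogue of techniques that dark-pattern lists flag.
KNOWN_DARK_PATTERNS = {"fake_scarcity", "confirmshaming", "forced_continuity"}

def audit(techniques_used, disclosed):
    """Return flagged techniques that are neither dropped nor disclosed."""
    return (techniques_used & KNOWN_DARK_PATTERNS) - disclosed

# A checkout flow using a scarcity cue without disclosure fails the audit.
print(audit({"fake_scarcity", "progress_bar"}, disclosed=set()))
# {'fake_scarcity'}
```

An empty result means every catalogued technique in use has been either removed or disclosed — the two compliant outcomes the authors describe.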

Moreover, the AI Act prohibits, not subliminal techniques per se, but only those that may cause significant harm. Thus, the real (self-)regulatory burden lies with testing whether a system increases risks of significant harm—arguably already part of standard user protection diligence.

Conclusion

The default interpretation of ‘subliminal techniques’ would render the AI Act’s prohibition irrelevant for most forms of problematic manipulative influence, and toothless in practice.

Therefore, ensuring the AI Act is legally practicable and reduces regulatory uncertainty requires a different, explicit definition – one that addresses the underlying societal concerns over manipulation while not over-burdening service providers.

We believe our definition achieves just this balance.

(The EU Parliament draft added prohibitions of “manipulative or deceptive techniques”, which present challenges worth discussing separately. Here we claim that subliminal techniques prohibitions, properly defined, could tackle manipulation concerns.)

Source: The AI Act needs a practical definition of ‘subliminal techniques’ – EURACTIV.com

Zoom CEO Says It’s Hard to Build Trust Over Zoom

In the wake of the covid-19 pandemic, employees across the world grew accustomed to remote work schedules that allow them to work from home. However, Zoom, one of the companies that carried pandemic digital infrastructure on its back, isn’t too keen on keeping remote workers away from the office, since the video calling platform is making them too friendly, according to leaked audio of CEO Eric Yuan at an all-hands meeting at the company.

Insider first reported on the recording in which Yuan told employees within 50 miles of an office that they must report to the office a minimum of two days a week. The announcement came at a companywide meeting on August 3, during which Yuan said that it’s difficult for Zoomies—the pet name the company gives to employees—to build trust with each other on a computer screen. Yuan also reportedly added that it’s difficult to have innovative conversations and debates on the company’s own platform because it makes people too friendly.

“Over the past several years, we’ve hired so many new ‘Zoomies’ that it’s really hard to build trust,” Yuan said in the audio. “We cannot have a great conversation. We cannot debate each other well because everyone tends to be very friendly when you join a Zoom call.”

Zoom did not immediately return Gizmodo’s request for comment on the audio or when employees are expected to return to the office.

Yuan’s proposed hybrid schedule is not a huge ask, as plenty of competently run companies are finding a happy medium between remote work and a wholly in-office routine through hybrid arrangements. His comments, however, say more about the company’s confidence in its own platform: apparently it makes you too friendly, and it can’t help you build trust with the guests on your call or help you innovate.

While Yuan may have put his foot in his mouth, he is far from the first tech CEO to ask employees to return to office post-covid-19 lockdowns. Earlier this summer, Meta CEO Mark Zuckerberg mandated three days per week in the office for his employees, while Apple has reportedly begun taking attendance of those in the office. Some companies, however, have seen plenty of friction in mandating a return to in-office work, like Amazon, whose employees have staged a walkout in protest. During the height of the pandemic, a majority of big tech companies and their employees saw the promise in a completely remote schedule, which was touted as a massive perk during a hiring boom and helped these companies grow exponentially. Now that the likes of Zoom, Amazon, and Meta are scaling back on that perk, they may be facing increasing backlash from their workforce.

Source: Zoom CEO Says It’s Hard to Build Trust Over Zoom

CNET Deletes Thousands of Old Articles to Game Google Search

Tech news website CNET has deleted thousands of old articles over the past few months in a bid to improve its performance in Google Search results, Gizmodo has learned.

Archived copies of CNET’s author pages show the company deleted small batches of articles prior to the second half of July, but then the pace increased. Thousands of articles disappeared in recent weeks. A CNET representative confirmed that the company was culling stories but declined to share exactly how many it has taken down.

[…]

Taylor Canada, CNET’s senior director of marketing and communications, said: “In an ideal world, we would leave all of our content on our site in perpetuity. Unfortunately, we are penalized by the modern internet for leaving all previously published content live on our site.”

[…]

CNET shared an internal memo about the practice. Removing, redirecting, or refreshing irrelevant or unhelpful URLs “sends a signal to Google that says CNET is fresh, relevant and worthy of being placed higher than our competitors in search results,” the document reads.

According to the memo about the “content pruning,” the company considers a number of factors before it “deprecates” an article, including SEO, the age and length of the story, traffic to the article, and how frequently Google crawls the page. The company says it weighs historical significance and other editorial factors before an article is taken down. When an article is slated for deletion, CNET says it maintains its own copy, and sends the story to the Internet Archive’s Wayback Machine.
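The memo’s factors amount to a scoring rubric over URLs. A speculative sketch of how such a “pruning” decision might be scored — the weights and field names are invented, since CNET’s actual process is not public:

```python
from datetime import date

def prune_score(article, today=date(2023, 8, 1)):
    """Higher score = stronger candidate for deprecation.
    Factors mirror those named in the memo (age, length, traffic,
    crawl frequency); the weights are entirely hypothetical."""
    age_years = (today - article["published"]).days / 365
    score = 0.0
    score += min(age_years, 10) * 1.0                       # older articles score higher
    score += 2.0 if article["word_count"] < 300 else 0.0    # thin content
    score += 3.0 if article["monthly_views"] < 50 else 0.0  # little traffic
    score += 1.0 if article["days_since_crawl"] > 180 else 0.0
    # Editorial override: historically significant pieces are never pruned.
    if article["historically_significant"]:
        score = 0.0
    return score

old_post = {"published": date(2009, 5, 1), "word_count": 150,
            "monthly_views": 4, "days_since_crawl": 400,
            "historically_significant": False}
print(prune_score(old_post))  # 16.0: a strong deprecation candidate
```

The editorial override is the interesting design choice: it encodes the memo’s claim that historical significance is weighed before any article is taken down.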

[…]

Google does not recommend deleting articles just because they’re considered “older,” said Danny Sullivan, the company’s Public Liaison for Google Search. In fact, the practice is something Google has advised against for years. After Gizmodo’s request for comment, Sullivan posted a series of tweets on the subject.

“Are you deleting content from your site because you somehow believe Google doesn’t like ‘old’ content? That’s not a thing! Our guidance doesn’t encourage this,” Sullivan tweeted.

[…]

However, SEO experts told Gizmodo content pruning can be a useful strategy in some cases, but it’s an “advanced” practice that requires high levels of expertise,[…]

Ideally outdated pages should be updated or redirected to a more relevant URL, and deleting content without a redirect should be a last resort. With fewer irrelevant pages on your site, the idea is that Google’s algorithms will be able to index and better focus on the articles or pages a publisher does want to promote.

Google may have an incentive to withhold details about its Search algorithm, both because it would rather be able to make its own decisions about how to rank websites, and because content pruning is a delicate process that can cause problems for publishers—and for Google—if it’s mishandled.

[…]

Whether or not deleting articles is an effective business strategy, it causes other problems that have nothing to do with search engines. For a publisher like CNET — one of the oldest tech news sites on the internet — removing articles means losing parts of the public record that could have unforeseen historical significance in the future.

[…]

Source: CNET Deletes Thousands of Old Articles to Game Google Search

That’s a big chunk of history gone there

AI shows classroom conversations predict academic success

Who would have thought that in-class, off-topic dialog could be a significant predictor of a student’s success in school? Scientists at Tsinghua University had a hunch and decided to deep-dive into how AI may help an under-studied segment of the education pool: K-6th grade students learning in live, online classes.

By analyzing the classroom dialogs of these children, scientists at Tsinghua University developed neural network models to predict what behaviors may lead to a more successful student.

[…]

The researchers published their results in the Journal of Social Computing on March 31. The recorded dialog data and the models built on it yielded valid findings that can be used to accurately predict academic performance.

“The most important message from this paper is that high-performing students, regardless of whether they are enrolled in STEM or non-STEM courses, consistently exhibit more , higher-level interactions concerning , and active participation in off-topic dialogs throughout the lesson,” said Jarder Luo, author and researcher of the study.

The implication here is that, above the other markers of a successful student (cognition and positive emotion), the most important predictor of performance for STEM and non-STEM students alike is the student’s interactive type. For STEM students, interactive types play their most crucial role in learning during the middle stage of the lesson. For non-STEM students, interactive types have about the same effect on performance during the middle and summary stages of the lesson.

Interactive dialog between students helps to streamline and integrate knowledge building; these open conversations help young students navigate conversations generally, but especially conversations on topics the student is likely not very familiar with. This could be why the data so strongly suggests that students who are more active in classroom dialog are typically higher-performing.

Additionally, the study found that meta-cognition, that is, “thinking about thinking,” is more prevalent in higher-performing non-STEM students than in their STEM counterparts. This could be in part because science is often taught in a way that builds on a basis of knowledge, whereas other areas of study require a bit more planning and evaluation of the material.

[…]

Source: How AI can use classroom conversations to predict academic success

More information: Yuanyi Zhen et al, Prediction of Academic Performance of Students in Online Live Classroom Interactions—An Analysis Using Natural Language Processing and Deep Learning Methods, Journal of Social Computing (2023). DOI: 10.23919/JSC.2023.0007

Big Business Isn’t Happy With FTC’s ‘Click to Cancel’ Proposal – says people enjoy tortuous cancellations

The Federal Trade Commission’s recent proposal to require that companies offer customers easy one-click options to cancel subscriptions might seem like a no-brainer, something unequivocally good for consumers. Not according to the companies it would affect, though. In their view, the introduction of simple unsubscribe buttons could lead to a wave of accidental cancellations by dumb customers. Best, they say, to let big businesses protect customers from themselves and make it a torment to stop your service.

Those were some of the points shared by groups representing major publishers and advertisers during the FTC’s recent public comment period, which ended in June. Consumers, according to the Wall Street Journal, generally appeared eager for the new proposals, which supporters say could make a dent in the tricky, bordering-on-deceptive anti-cancellation tactics deployed by cable companies, entertainment sites, gyms, and other businesses that game out ways to make it as difficult as possible to quickly quit a subscription.

[…]

Source: Big Business Isn’t Happy With FTC’s ‘Click to Cancel’ Proposal

AI Tool Decodes Brain Cancer’s Genome During Surgery

Scientists have designed an AI tool that can rapidly decode a brain tumor’s DNA to determine its molecular identity during surgery — critical information that under the current approach can take a few days and up to a few weeks.

Knowing a tumor’s molecular type enables neurosurgeons to make decisions such as how much brain tissue to remove and whether to place tumor-killing drugs directly into the brain — while the patient is still on the operating table.

[…]

A report on the work, led by Harvard Medical School researchers, is published July 7 in the journal Med.

Accurate molecular diagnosis — which details DNA alterations in a cell — during surgery can help a neurosurgeon decide how much brain tissue to remove. Removing too much when the tumor is less aggressive can affect a patient’s neurologic and cognitive function. Likewise, removing too little when the tumor is highly aggressive may leave behind malignant tissue that can grow and spread quickly.

[…]

Knowing a tumor’s molecular identity during surgery is also valuable because certain tumors benefit from on-the-spot treatment with drug-coated wafers placed directly into the brain at the time of the operation, Yu said.

[…]

The tool, called CHARM (Cryosection Histopathology Assessment and Review Machine), is freely available to other researchers. It still has to be clinically validated through testing in real-world settings and cleared by the FDA before deployment in hospitals, the research team said.

[…]

CHARM was developed using 2,334 brain tumor samples from 1,524 people with glioma from three different patient populations. When tested on a never-before-seen set of brain samples, the tool distinguished tumors with specific molecular mutations at 93 percent accuracy and successfully classified three major types of gliomas with distinct molecular features that carry different prognoses and respond differently to treatments.

Going a step further, the tool successfully captured visual characteristics of the tissue surrounding the malignant cells. It was capable of spotting telltale areas with greater cellular density and more cell death within samples, both of which signal more aggressive glioma types.

The tool was also able to pinpoint clinically important molecular alterations in a subset of low-grade gliomas, a subtype of glioma that is less aggressive and therefore less likely to invade surrounding tissue. Each of these alterations signals a different propensity for growth, spread, and treatment response.

The tool further connected the appearance of the cells — the shape of their nuclei, the presence of edema around the cells — with the molecular profile of the tumor. This means that the algorithm can pinpoint how a cell’s appearance relates to the molecular type of a tumor.

[…]

Source: AI Tool Decodes Brain Cancer’s Genome During Surgery | Harvard Medical School