A newly published study in the Lancet Oncology journal has found that the use of AI in mammogram cancer screening can safely cut radiologist workloads nearly in half without increasing the rate of false-positive results. In effect, the study found that the AI’s recommendations were on par with those of two radiologists working together.
“AI-supported mammography screening resulted in a similar cancer detection rate compared with standard double reading, with a substantially lower screen-reading workload, indicating that the use of AI in mammography screening is safe,” the study found.
The study was performed by a research team out of Lund University in Sweden and followed 80,033 Swedish women (average age of 54) for just over a year in 2021-2022. Of the 39,996 patients who were randomly assigned AI-empowered breast cancer screenings, 0.6 percent, or 244 tests, returned screen-detected cancers. Of the other 40,024 patients who received conventional cancer screenings, just 0.5 percent, or 203 tests, returned screen-detected cancers.
Of those extra 41 cancers detected by the AI side, 19 turned out to be invasive. Both the AI-empowered and conventional screenings ran a 1.5 percent false-positive rate. Most impressively, radiologists on the AI side had to look at 36,886 fewer screen readings than their counterparts, a 44 percent reduction in their workload.
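For readers who want to check the arithmetic, the detection rates follow directly from the counts quoted above; here is a minimal, purely illustrative calculation (the 44 percent workload figure is reported by the study relative to total readings in each arm and is not re-derived here):

```python
# Quick sanity check of the detection figures quoted above (illustrative only).
ai_patients, ai_cancers = 39_996, 244
std_patients, std_cancers = 40_024, 203

ai_rate = 1000 * ai_cancers / ai_patients      # cancers detected per 1,000 screens
std_rate = 1000 * std_cancers / std_patients

print(f"AI-supported arm:  {ai_rate:.1f} cancers per 1,000 screens (~{ai_cancers/ai_patients:.2%})")
print(f"Standard arm:      {std_rate:.1f} cancers per 1,000 screens (~{std_cancers/std_patients:.2%})")
print(f"Extra cancers detected in the AI arm: {ai_cancers - std_cancers}")
```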
Keith Thomas from New York was involved in a diving accident back in 2020 that injured his spine’s C4 and C5 vertebrae, leading to a total loss of feeling and movement from the chest down. Recently, though, Thomas has been able to move his arm at will and feel his sister hold his hand, thanks to AI brain implant technology developed by Northwell Health’s Feinstein Institute of Bioelectronic Medicine.
The research team first spent months mapping his brain with MRIs to pinpoint the exact parts of his brain responsible for arm movements and the sense of touch in his hands. Then, four months ago, surgeons performed a 15-hour procedure to implant microchips into his brain — Thomas was even awake for some parts so he could tell them what sensations he was feeling in his hand as they probed parts of the organ.
In addition to the microchips inside his skull, the team installed external ports on top of his head. Those ports connect to a computer running the artificial intelligence (AI) algorithms that the team developed to interpret his thoughts and turn them into action. The researchers call this approach “thought-driven therapy,” because it all starts with the patient’s intentions. If he thinks of wanting to move his hand, for instance, his brain implant sends signals to the computer, which then sends signals to the electrode patches on his spine and hand muscles to stimulate movement. They attached sensors to his fingertips and palms as well, to stimulate sensation.
Thanks to this system, he was able to move his arm at will and feel his sister holding his hand in the lab. While he needed to be attached to the computer for those milestones, the researchers say Thomas has shown signs of recovery even when the system is off. His arm strength has apparently “more than doubled” since the study began, and his forearm and wrist can now feel some new sensations. If all goes well, the team’s thought-driven therapy could help him regain more of his sense of touch and mobility.
While the approach has a ways to go, the team behind it is hopeful that it could change the lives of people living with paralysis.
Scientists have combined two light wavelengths to deactivate a bacterium that is resistant to some of the world’s most widely used antibiotics, raising hope that the regimen could be adapted as a potential disinfectant treatment.
Under the guidance of project leader Dr. Gale Brightwell, scientists at New Zealand’s AgResearch demonstrated the novel antimicrobial efficacy of a combination of two light wavelengths against a ‘superbug’: antibiotic-resistant, extended-spectrum beta-lactamase-producing E. coli.
[…]
A combination of far-UVC (222 nm) and blue LED (405 nm) light has been shown to be effective at inactivating a wide range of microorganisms while being much safer to use and handle than traditional UVC at 254 nm, she said.
“The E. coli we chose for this investigation were extended-spectrum beta-lactamases producing E. coli (ESBL-Ec) as these bacteria produce enzymes that break down and destroy commonly used antibiotics, including penicillins and cephalosporins, making these drugs ineffective for treating infections,” she said.
[…]
The team found that a combination of dual far-UVC and blue LED light could be used to disinfect both antibiotic resistant and antibiotic sensitive E. coli, offering a non-thermal technology that may not drive further antibiotic resistance.
However, when exposed to sub-lethal levels of the dual far-UVC and blue LED light, the antibiotic-resistant E. coli tested did exhibit light tolerance. One surprising finding was that this light tolerance was exhibited only by the antibiotic-resistant E. coli and not by the antibiotic-sensitive E. coli that was also tested.
Gardner says further work is needed to understand whether the light tolerance is due to a genetic change, or some other mechanism.
“It is also important to investigate the development of light tolerance in other antimicrobial-resistant bacteria and to determine the minimum dose of far-UVC light that can create light tolerance as well as the potential of further resistance development to other things such as sanitizers, heat, and pH in bacteria for application purposes,” she said.
The findings are published in the Journal of Applied Microbiology.
More information: Amanda Gardner et al, Light Tolerance of Extended Spectrum ß-lactamase Producing Escherichia coli Strains After Repetitive Exposure to Far-UVC and Blue LED Light, Journal of Applied Microbiology (2023). DOI: 10.1093/jambio/lxad124
The team at NASA made three animations, all showing carbon dioxide levels throughout the year 2021. Each one shows four major contributors: fossil fuels, burning biomass, land ecosystems, and the oceans. In the view showing North and South America, we can see the results of plants absorbing the gas via photosynthesis and then releasing it during the winter months. There are intense contributions along the northeastern seaboard of the U.S., mainly from the burning of fossil fuels. There’s also a rise and fall of the gas over the Amazon rainforest, which the team interprets as plants absorbing carbon during the day and then releasing it at night.
Carbon dioxide measurements over North and South America in 2021. NASA’s Scientific Visualization Studio [NB: for the impatient, skip forward through the videos to see how fast this stuff is covering more and more of the planet]
The animations also show sources and sinks (where CO2 is absorbed) in Asia, including an enormous amount of fossil fuel emissions over China. In other parts of the world, such as Australia, absorption of the gas is much higher and emissions are lower, owing to smaller populations.
This greenhouse-induced climate change is a complex process to study, but it’s clear that carbon dioxide is part of it. There are two kinds of sources here on Earth: natural and human-caused. Natural CO2 sources provide most of the gas released into the atmosphere. These include the oceans, animal and plant respiration, decomposition of organic matter, forest fires, and volcanic eruptions. Scientists also know of naturally occurring CO2 deposits in Earth’s crust that could serve as sources. There are also “sinks,” where the gas gets trapped for some period of time. The oceans (particularly the Southern Ocean), soil, and forests all “suck it in,” along with other plants. Those same sinks can later release their stores of the gas.
Human-caused (or “anthropogenic”) sources include power generation, chemical production, agricultural practices, and transportation. Note that most of these involve fossil fuel burning. Fossil fuels are natural gas, coal, and oil.
How CO2 Cycles Over Time
The carbon cycle, which helps trace carbon dioxide on Earth. Courtesy: NOAA
So, we know that carbon dioxide goes through a natural “cycle” in which it is exchanged among the air, ground, oceans, plants, humans, and animals. Throughout most of human history, this cycle kept the seasonal average of CO2 in the atmosphere around an estimated 280 parts per million (ppm). In modern times, fossil fuel burning and other human activities have added more CO2 to the cycle and changed the amount of it in the atmosphere. That pace has accelerated to the point where levels are up by 50% in less than 200 years. Today the amount of CO2 is around 420 ppm, and it continues to rise as we pump more of the gas into the air. Climatologists predict that as it rises, the average global temperature will continue to rise along with it.
If we look at average global temperatures since historical measurements began (when we were pumping less CO2 into the air), Earth’s temperature has risen about 0.08°C (0.14°F) each decade. Natural variability plays some role, but the addition of more carbon dioxide plays an increasing one. Over time, that heating has added up to roughly a 2°F (about 1.1°C) rise over more than a century, and it tracks with the increasing amounts of this gas in our air. Two degrees is a lot; even one degree is enough to cause significant effects. To give you an idea, in the distant past, when global averages dropped by a degree or two, Earth suffered what’s called the Little Ice Age.
Warming Drives Change
A chart showing how global temperatures changed from 1880 to 2020. Courtesy MET Office Hadley Centre/Climatic Research Unit.
It may not sound like much, but two degrees is enough to drive change in our weather patterns, water cycles, and other environmental processes. That gradual warming is why experts often refer to “global warming”. It’s not that everywhere gets hot at once. It means that the average annual air temperature is rising. To give you an idea, the year 2022 was the sixth warmest year since people began keeping global records in 1880.
Maps and animations of CO2 sources, sinks, and cycles like the ones from NASA satellite data show in stark detail the cycle of this particular gas. The idea is to help people understand visually and intellectually what changes our atmosphere experiences over time.
[…] Taking the drug osimertinib after surgery dramatically reduced the risk of patients dying by 51%, results presented at the world’s largest cancer conference showed.
[…]
“Fifty per cent is a big deal in any disease, but certainly in a disease like lung cancer, which has typically been very resistant to therapies.”
The Adaura trial involved patients aged between 30 and 86 in 26 countries and looked at whether the pill could help patients with non-small cell lung cancer, the most common form of the disease.
Everyone in the trial had a mutation of the EGFR gene, which is found in about a quarter of global lung cancer cases, and accounts for as many as 40% of cases in Asia. An EGFR mutation is more common in women than men, and in people who have never smoked or have been light smokers.
[…]
After five years, 88% of patients who took the daily pill after the removal of their tumour were still alive, compared with 78% of patients treated with a placebo. Overall, there was a 51% lower risk of death for those who received osimertinib compared with those who received placebo.
Scientists have used artificial intelligence (AI) to discover a new antibiotic that can kill a deadly species of superbug.
The AI helped narrow down thousands of potential chemicals to a handful that could be tested in the laboratory.
The result was a potent, experimental antibiotic called abaucin, which will need further tests before being used.
The researchers in Canada and the US say AI has the power to massively accelerate the discovery of new drugs.
It is the latest example of how the tools of artificial intelligence can be a revolutionary force in science and medicine.
[…]
To find a new antibiotic, the researchers first had to train the AI. They took thousands of drugs whose precise chemical structures were known and manually tested them on Acinetobacter baumannii to see which could slow it down or kill it.
This information was fed into the AI so it could learn the chemical features of drugs that could attack the problematic bacterium.
The AI was then unleashed on a list of 6,680 compounds whose effectiveness was unknown. The results – published in Nature Chemical Biology – showed it took the AI an hour and a half to produce a shortlist.
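In outline, that is a standard train-then-screen workflow. The sketch below shows its shape only; it is not the study’s actual model (the researchers used a neural network trained on chemical structures), and the fingerprint vectors, dataset sizes, and random-forest classifier here are placeholders.

```python
# A generic sketch of the train-then-screen workflow described above.
# NOT the authors' model: a random forest over made-up binary "fingerprint"
# vectors stands in, purely to illustrate the shape of the pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# 1. Training data: compounds with known structures, each manually tested
#    against A. baumannii (1 = inhibited growth, 0 = no effect).
n_train, n_bits = 7500, 512                    # hypothetical sizes
X_train = rng.integers(0, 2, size=(n_train, n_bits))
y_train = rng.integers(0, 2, size=n_train)     # placeholder lab results

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# 2. Virtual screen: score a library of compounds with unknown activity
#    and keep a shortlist for laboratory testing.
X_library = rng.integers(0, 2, size=(6680, n_bits))
scores = model.predict_proba(X_library)[:, 1]
shortlist = np.argsort(scores)[::-1][:240]     # top-ranked compounds
print("Top candidate indices:", shortlist[:10])
```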
The researchers tested 240 in the laboratory, and found nine potential antibiotics. One of them was the incredibly potent antibiotic abaucin.
Laboratory experiments showed it could treat infected wounds in mice and was able to kill A. baumannii samples from patients.
However, Dr Stokes told me: “This is when the work starts.”
The next step is to perfect the drug in the laboratory and then perform clinical trials. He expects it could be 2030 before the first AI-discovered antibiotics are available to be prescribed.
Curiously, this experimental antibiotic has no effect on other species of bacteria, working only on A. baumannii.
Many antibiotics kill bacteria indiscriminately. The researchers believe the precision of abaucin will make it harder for drug-resistance to emerge, and could lead to fewer side-effects.
Gert-Jan Oskam was living in China in 2011 when he was in a motorcycle accident that left him paralyzed from the hips down. Now, with a combination of devices, scientists have given him control over his lower body again. “For 12 years I’ve been trying to get back my feet,” Mr. Oskam said in a press briefing on Tuesday. “Now I have learned how to walk normal, natural.” In a study published on Wednesday in the journal Nature, researchers in Switzerland described implants that provided a “digital bridge” between Mr. Oskam’s brain and his spinal cord, bypassing injured sections. The discovery allowed Mr. Oskam, 40, to stand, walk and ascend a steep ramp with only the assistance of a walker. More than a year after the implant was inserted, he has retained these abilities and has actually shown signs of neurological recovery, walking with crutches even when the implant was switched off. “We’ve captured the thoughts of Gert-Jan, and translated these thoughts into a stimulation of the spinal cord to re-establish voluntary movement,” Gregoire Courtine, a spinal cord specialist at the Swiss Federal Institute of Technology, Lausanne, who helped lead the research, said at the press briefing.
In the new study, the brain-spine interface, as the researchers called it, took advantage of an artificial intelligence thought decoder to read Mr. Oskam’s intentions — detectable as electrical signals in his brain — and match them to muscle movements. The etiology of natural movement, from thought to intention to action, was preserved. The only addition, as Dr. Courtine described it, was the digital bridge spanning the injured parts of the spine. […] To achieve this result, the researchers first implanted electrodes in Mr. Oskam’s skull and spine. The team then used a machine-learning program to observe which parts of the brain lit up as he tried to move different parts of his body. This thought decoder was able to match the activity of certain electrodes with particular intentions: One configuration lit up whenever Mr. Oskam tried to move his ankles, another when he tried to move his hips.
Then the researchers used another algorithm to connect the brain implant to the spinal implant, which was set to send electrical signals to different parts of his body, sparking movement. The algorithm was able to account for slight variations in the direction and speed of each muscle contraction and relaxation. And, because the signals between the brain and spine were sent every 300 milliseconds, Mr. Oskam could quickly adjust his strategy based on what was working and what wasn’t. Within the first treatment session he could twist his hip muscles. Over the next few months, the researchers fine-tuned the brain-spine interface to better fit basic actions like walking and standing. Mr. Oskam gained a somewhat healthy-looking gait and was able to traverse steps and ramps with relative ease, even after months without treatment. Moreover, after a year in treatment, he began noticing clear improvements in his movement without the aid of the brain-spine interface. The researchers documented these improvements in weight-bearing, balancing and walking tests. Now, Mr. Oskam can walk in a limited way around his house, get in and out of a car and stand at a bar for a drink. For the first time, he said, he feels like he is the one in control.
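To make the control loop concrete, here is a minimal conceptual sketch of a decode-and-stimulate cycle running on the roughly 300-millisecond cadence mentioned above. Everything in it (the channel count, the decoder, the stimulation targets) is invented for illustration; it is not the researchers’ software.

```python
# Conceptual sketch of a "digital bridge" control loop: read brain activity,
# decode an intention, and trigger spinal stimulation every ~300 ms.
# All signals and mappings below are placeholders.
import time
import random

INTENTIONS = ["rest", "hip_flex", "knee_extend", "ankle_lift"]

def read_brain_activity():
    """Stand-in for the implanted cortical electrodes."""
    return [random.random() for _ in range(64)]   # 64 hypothetical channels

def decode_intention(features):
    """Stand-in for the trained thought decoder: map activity to an intention."""
    return INTENTIONS[int(sum(features)) % len(INTENTIONS)]

def stimulate_spine(intention):
    """Stand-in for the spinal implant issuing a stimulation pattern."""
    print(f"stimulating pattern for: {intention}")

if __name__ == "__main__":
    for _ in range(10):                 # a short demo run of the loop
        activity = read_brain_activity()
        intent = decode_intention(activity)
        if intent != "rest":
            stimulate_spine(intent)
        time.sleep(0.3)                 # brain-to-spine updates every ~300 ms
```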
A new study examining the effects of planting a wildflower meadow in the historic grounds of King’s College, Cambridge, has demonstrated its benefits to local biodiversity and climate change mitigation.
The study, led by King’s Research Fellow Dr. Cicely Marshall, found that establishing the meadow had made a considerable impact on the wildlife value of the land while reducing the greenhouse gas emissions associated with its upkeep.
Marshall and her colleagues, among them three King’s undergraduate students, conducted biodiversity surveys over three years to compare the species richness, abundance and composition supported by the meadow and adjacent lawn.
They found that, in spite of its small size, the wildflower meadow supported three times as many species of plants, spiders and bugs, including 14 species with conservation designations.
Terrestrial invertebrate biomass was found to be 25 times higher in the meadow, with bat activity over the meadow also being three times higher than over the remaining lawn.
The study is published May 23 in the journal Ecological Solutions and Evidence.
As well as looking at the benefits to biodiversity, Marshall and her colleagues modeled the impact of the meadow on climate change mitigation efforts, by assessing the changes in reflectivity, soil carbon sequestration, and emissions associated with its maintenance.
The reduced maintenance and fertilization associated with the meadow were found to save an estimated 1.36 tons CO2-e per hectare per year when compared with the grass lawn.
Surface reflectance increased by more than 25%, contributing to a reduced urban heat island effect, with the meadow more likely to tolerate an intensified drought regime.
Brain signals can be used to detect how much pain a person is experiencing, which could overhaul how we treat certain chronic pain conditions, a new study has suggested.
The research, published in Nature Neuroscience today, is the first time a human’s chronic-pain-related brain signals have been recorded. It could aid the development of personalized therapies for the most severe forms of pain.
[…]
Researchers from the University of California, San Francisco, implanted electrodes in the brains of four people with chronic pain. The patients then answered surveys about the severity of their pain multiple times a day over a period of three to six months. After they finished filling out each survey, they sat quietly for 30 seconds so the electrodes could record their brain activity. This helped the researchers identify biomarkers of chronic pain in the brain signal patterns, which were as unique to the individual as a fingerprint.
Next, the researchers used machine learning to model the results of the surveys. They found they could successfully predict how the patients would score the severity of their pain by examining their brain activity, says Prasad Shirvalkar, one of the study’s authors.
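A minimal sketch of that kind of modeling, assuming nothing about the study’s actual features or estimator: predict a self-reported pain score from a handful of placeholder neural features and check the fit by cross-validation.

```python
# Sketch of predicting a pain rating from recorded brain-activity features.
# Features, model, and data are placeholders, not the study's method.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

n_surveys, n_features = 120, 16                      # e.g. spectral power in several bands/sites
X = rng.normal(size=(n_surveys, n_features))         # placeholder neural features
true_w = rng.normal(size=n_features)
pain_scores = X @ true_w + rng.normal(scale=0.5, size=n_surveys)  # synthetic ratings

model = Ridge(alpha=1.0)
r2 = cross_val_score(model, X, pain_scores, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f}")
```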
“The hope is that now that we know where these signals live, and now that we know what type of signals to look for, we could actually try to track them noninvasively,” he says. “As we recruit more patients, or better characterize how these signals vary between people, maybe we can use it for diagnosis.”
The researchers also found they were able to distinguish a patient’s chronic pain from acute pain deliberately inflicted using a thermal probe. The chronic-pain signals came from a different part of the brain, suggesting that it’s not just a prolonged version of acute pain, but something else entirely.
Environmental DNA sampling is nothing new. Rather than having to spot or catch an animal, researchers can sample the DNA from the traces it leaves behind, giving clues about its genetic diversity, its lineage (e.g. via mitochondrial DNA) and the population’s health. What caught University of Florida (UoF) researchers by surprise while they were using environmental DNA sampling to study endangered sea turtles was just how much human DNA they found in their samples. This led them to perform a study on the human DNA they sampled in this way, with intriguing implications.
Ever since genetic sequencing became possible there have been many breakthroughs that have made it more precise, cheaper, and more versatile. The UoF researchers argue in their paper in Nature Ecology & Evolution that there is a lot of potential in sampling human environmental DNA (eDNA) to study populations, much as is already done with wastewater sampling, only more universally. This could have great benefits for studying human populations, much as we already monitor other animal species using their eDNA and the similar materials they discard every day as part of normal biological function.
The researchers were able to detect various genetic issues in the human eDNA they collected, demonstrating its viability as a population health monitoring tool. The less exciting fallout of their findings was just how hard it is to prevent contamination of samples with human DNA, which could affect other studies. Meanwhile, the big DNA elephant in the room is individual-level tracking, something that is incredibly exciting to researchers who monitor wild animal populations. Unlike those animals, however, Homo sapiens are unique in that they’d object to such individual-level eDNA-based monitoring.
What the full implications of such new tools will be is hard to say, but they’re just one of the inevitable results as our genetic sequencing methods improve and humans keep shedding their DNA everywhere.
Research has shown that a thin cellulose film can inactivate the SARS-CoV-2 virus within minutes, inhibit the growth of bacteria including E. coli, and mitigate contact transfer of pathogens.
The coating consists of a thin film of cellulose fiber that is invisible to the naked eye, and is abrasion-resistant under dry conditions, making it suitable for use on high traffic objects such as door handles and handrails.
The coating was developed by scientific teams from the University of Birmingham, Cambridge University, and FiberLean Technologies, who worked on a project to formulate treatments for glass, metal or laminate surfaces that would deliver long-lasting protection against the COVID-19 virus.
[…]
a coating made from micro-fibrillated cellulose (MFC)
[…]
The COVID-19 virus is known to remain active for several days on surfaces such as plastic and stainless steel, but for only a few hours on newspaper.
[…]
The researchers found that the porous nature of the film plays a significant role: it accelerates the evaporation rate of liquid droplets and introduces an imbalanced osmotic pressure across the bacterial membrane.
They then tested whether the coating could inhibit surface transmission of SARS-CoV-2. Here they found a three-fold reduction of infectivity when droplets containing the virus were left on the coating for 5 minutes, and, after 10 minutes, the infectivity fell to zero.
[…]
Professor Zhang commented, “The risk of surface transmission, as opposed to aerosol transmission, comes from large droplets which remain infective if they land on hard surfaces, where they can be transferred by touch. This surface coating technology uses sustainable materials and could potentially be used in conjunction with other antimicrobial actives to deliver a long-lasting and slow-release antimicrobial effect.”
The researchers confirmed the stability of the coating with mechanical scraping tests, in which the coating showed no noticeable damage when dry but was easily removed from the surface when wetted, making it convenient and suitable for daily cleaning and disinfection practice.
The paper is published in the journal ACS Applied Materials & Interfaces.
More information: Shaojun Qi et al, Porous Cellulose Thin Films as Sustainable and Effective Antimicrobial Surface Coatings, ACS Applied Materials & Interfaces (2023). DOI: 10.1021/acsami.2c23251
When neuropsychologist Bernhard Sabel put his new fake-paper detector to work, he was “shocked” by what it found. After screening some 5000 papers, he estimates up to 34% of neuroscience papers published in 2020 were likely made up or plagiarized; in medicine, the figure was 24%. Both numbers, which he and colleagues report in a medRxiv preprint posted on 8 May, are well above levels they calculated for 2010—and far larger than the 2% baseline estimated in a 2022 publishers’ group report.
[…]
Journals are awash in a rising tide of scientific manuscripts from paper mills—secretive businesses that allow researchers to pad their publication records by paying for fake papers or undeserved authorship. “Paper mills have made a fortune by basically attacking a system that has had no idea how to cope with this stuff,” says Dorothy Bishop, a University of Oxford psychologist who studies fraudulent publishing practices. A 2 May announcement from the publisher Hindawi underlined the threat: It shut down four of its journals it found were “heavily compromised” by articles from paper mills.
Sabel’s tool relies on just two indicators—authors who use private, noninstitutional email addresses, and those who list an affiliation with a hospital. It isn’t a perfect solution, because of a high false-positive rate. Other developers of fake-paper detectors, who often reveal little about how their tools work, contend with similar issues.
[…]
To fight back, the International Association of Scientific, Technical, and Medical Publishers (STM), representing 120 publishers, is leading an effort called the Integrity Hub to develop new tools. STM is not revealing much about the detection methods, to avoid tipping off paper mills. “There is a bit of an arms race,” says Joris van Rossum, the Integrity Hub’s product director. He did say one reliable sign of a fake is referencing many retracted papers; another involves manuscripts and reviews emailed from internet addresses crafted to look like those of legitimate institutions.
Twenty publishers—including the largest, such as Elsevier, Springer Nature, and Wiley—are helping develop the Integrity Hub tools, and 10 of the publishers are expected to use a paper mill detector the group unveiled in April. STM also expects to pilot a separate tool this year that detects manuscripts simultaneously sent to more than one journal, a practice considered unethical and a sign they may have come from paper mills.
[…]
STM hasn’t yet generated figures on accuracy or false-positive rates because the project is too new. But catching as many fakes as possible typically produces more false positives. Sabel’s tool correctly flagged nearly 90% of fraudulent or retracted papers in a test sample. However, it marked up to 44% of genuine papers as fake, so results still need to be confirmed by skilled reviewers.
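Those two numbers are why human confirmation remains essential. A rough illustration, using the sensitivity and false-positive rate quoted above and assuming, as a stand-in base rate, the roughly 24% share of suspect papers estimated for medicine:

```python
# Why a high detection rate can still flood reviewers with false alarms.
# Figures: sensitivity ~90%, false-positive rate up to ~44% (from the text);
# the 24% base rate is an assumption borrowed from the medicine estimate above.
base_rate = 0.24        # assumed fraction of fake papers in the field
sensitivity = 0.90      # fraction of fakes the tool flags
false_pos_rate = 0.44   # fraction of genuine papers it also flags

flagged_fake = sensitivity * base_rate
flagged_genuine = false_pos_rate * (1 - base_rate)
precision = flagged_fake / (flagged_fake + flagged_genuine)
print(f"Share of flagged papers that are actually fake: {precision:.0%}")
# -> roughly 40%, which is why flagged papers still need skilled human review.
```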
[…]
Publishers embracing gold open access—under which journals collect a fee from authors to make their papers immediately free to read when published—have a financial incentive to publish more, not fewer, papers. They have “a huge conflict of interest” regarding paper mills, says Jennifer Byrne of the University of Sydney, who has studied how paper mills have doctored cancer genetics data.
The “publish or perish” pressure that institutions put on scientists is also an obstacle. “We want to think about engaging with institutions on how to take away perhaps some of the [professional] incentives which can have these detrimental effects,” van Rossum says. Such pressures can push clinicians without research experience to turn to paper mills, Sabel adds, which is why hospital affiliations can be a red flag.
A closed approach to building a detection tool is an incredibly bad idea: no one can really know what it is doing, and certain types of research will be flagged every time, for example. This type of tool especially needs to be accountable and changeable by the peers who have to review the papers it flags as suspect. Only by keeping this type of tool open can it be improved by third parties who also have a vested interest in improving fake-detection rates (e.g. universities, which you would think have quite a few smart people). Keeping it closed also lends a false sense of security, especially if the detection methods have already leaked and paper mills from certain sources are circumventing them already. Security by obscurity is never, ever a good idea.
Finding, cultivating, and bioengineering organisms that can digest plastic not only aids in the removal of pollution, but is now also big business. Several microorganisms that can do this have already been found, but when their enzymes that make this possible are applied at an industrial scale, they typically only work at temperatures above 30°C.
The heating required means that industrial applications remain costly to date, and aren’t carbon-neutral. But there is a possible solution to this problem: finding specialist cold-adapted microbes whose enzymes work at lower temperatures.
Scientists from the Swiss Federal Institute WSL knew where to look for such microorganisms: at high altitudes in the Alps of their country, or in the polar regions. Their findings are published in Frontiers in Microbiology.
“Here we show that novel microbial taxa obtained from the ‘plastisphere’ of alpine and arctic soils were able to break down biodegradable plastics at 15°C,” said first author Dr. Joel Rüthi, currently a guest scientist at WSL. “These organisms could help to reduce the costs and environmental burden of an enzymatic recycling process for plastic.”
[…]
None of the strains were able to digest PE, even after 126 days of incubation on these plastics. But 19 of the strains (56%), including 11 fungi and eight bacteria, were able to digest PUR at 15°C, while 14 fungi and three bacteria were able to digest the plastic mixtures of PBAT and PLA. Nuclear magnetic resonance (NMR) and a fluorescence-based assay confirmed that these strains were able to chop up the PBAT and PLA polymers into smaller molecules.
[…]
The best performers were two uncharacterized fungal species in the genera Neodevriesia and Lachnellula: these were able to digest all of the tested plastics except PE.
[…]
an alarming new study has found that even when plastic makes it to a recycling center, it can still end up splintering into smaller bits that contaminate the air and water. This pilot study focused on a single new facility where plastics are sorted, shredded, and melted down into pellets. Along the way, the plastic is washed several times, sloughing off microplastic particles—fragments smaller than 5 millimeters—into the plant’s wastewater.
Because there were multiple washes, the researchers could sample the water at four separate points along the production line. (They are not disclosing the identity of the facility’s operator, who cooperated with their project.) This plant was actually in the process of installing filters that could snag particles larger than 50 microns (a micron is a millionth of a meter), so the team was able to calculate the microplastic concentrations in raw versus filtered discharge water—basically a before-and-after snapshot of how effective filtration is.
Their microplastics tally was astronomical. Even with filtering, they calculate that the total discharge from the different washes could produce up to 75 billion particles per cubic meter of wastewater. Depending on the recycling facility, that liquid would ultimately get flushed into city water systems or the environment. In other words, recyclers trying to solve the plastics crisis may in fact be accidentally exacerbating the microplastics crisis, which is coating every corner of the environment with synthetic particles.
[…]
The good news here is that filtration makes a difference: Without it, the researchers calculated that this single recycling facility could emit up to 6.5 million pounds of microplastic per year. Filtration got it down to an estimated 3 million pounds. “So it definitely was making a big impact when they installed the filtration,” says Brown. “We found particularly high removal efficiency of particles over 40 microns.”
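A quick back-of-the-envelope reading of those figures (the per-litre conversion is mine, and the particle count is the study’s upper bound):

```python
# Back-of-the-envelope check of the figures quoted above (illustrative only).
unfiltered_lbs_per_year = 6.5e6     # estimated microplastic discharge without filtration
filtered_lbs_per_year = 3.0e6       # estimate after installing ~50-micron filters
particles_per_m3 = 75e9             # study's upper-bound particle count in discharge water

reduction = 1 - filtered_lbs_per_year / unfiltered_lbs_per_year
particles_per_litre = particles_per_m3 / 1000
print(f"Filtration cut the estimated discharge by about {reduction:.0%} by mass")
print(f"Even so, discharge water could carry ~{particles_per_litre:,.0f} particles per litre")
```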
[…]
Depending on the recycling facility, that wastewater might next flow to a sewer system and eventually to a treatment plant that is not equipped to filter out such small particles before pumping the water into the environment. But, says Enck, “some of these facilities might be discharging directly into groundwater. They’re not always connected to the public sewer system.” That means the plastics could end up in the water people use for drinking or irrigating crops.
The full extent of the problem isn’t yet clear, as this pilot study observed just one facility. But because it was brand-new, it was probably a best-case scenario, says Steve Allen, a microplastics researcher at the Ocean Frontiers Institute and coauthor of the new paper. “It is a state-of-the-art plant, so it doesn’t get any better,” he says. “If this is this bad, what are the others like?”
[…]
Still, researchers like Brown don’t think that we should abandon recycling. This new research shows that while filters can’t stop all the microplastics from leaving a recycling facility, they at least help substantially. “I really don’t want it to suggest to people that we shouldn’t recycle, and to give it a completely negative reputation,” she says. “What it really highlights is that we just really need to consider the impacts of the solutions.”
Scientists and anti-pollution groups agree that the ultimate solution isn’t relying on recycling or trying to pull trash out of the ocean, but massively cutting plastic production. “I just think this illustrates that plastics recycling in its traditional form has some pretty serious problems,” says Enck. “This is yet another reason to do everything humanly possible to avoid purchasing plastics.”
[…] a team of researchers from the École Polytechnique Fédérale de Lausanne (EPFL) successfully developed a machine-learning algorithm that can decode a mouse’s brain signals and reproduce images of what it’s seeing.
[…]
The mice were shown a black and white movie clip from the 1960s of a man running to a car and then opening its trunk. While the mice were watching the clip, scientists measured and recorded their brain activity using two approaches: electrode probes inserted into their brains’ visual cortex region, as well as optical probes for mice that had been genetically engineered so that the neurons in their brains glow green when firing and transmitting information. That data was then used to train a new machine learning algorithm called CEBRA.
When the algorithm was then applied to the captured brain signals of a new mouse watching the black and white movie clip for the first time, CEBRA was able to correctly identify specific frames the mouse was seeing as it watched. Because CEBRA had also been trained on that clip, it could generate frames that were a near-perfect match, though with the occasional telltale distortions of AI-generated imagery.
[…]
This research involved a very specific (and short) piece of footage that the machine-learning algorithm was already familiar with. In its current form, CEBRA also only takes into account the activity of about 1% of the neurons in a mouse’s brain, so there’s definitely room for its accuracy and capabilities to improve. The research also isn’t just about decoding what a brain sees. A study published in the journal Nature shows that CEBRA can also be used to “predict the movements of the arm in primates,” and “reconstruct the positions of rats as they freely run around an arena.” It’s a potentially far more accurate way to peer into the brain, and understand how all the neural activity correlates to what is being processed.
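For readers curious what “identifying the frame a mouse is seeing” means computationally, here is a deliberately simplified stand-in for that decoding task. It is not the CEBRA method; a plain nearest-neighbour decoder on synthetic data is used purely to illustrate the idea of mapping neural activity to frame indices.

```python
# Simplified stand-in for frame decoding: learn a mapping from neural
# activity (recorded while a known movie plays) to frame index, then
# decode frames from a new session. All data below are synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)

n_frames, n_neurons = 300, 100
frame_templates = rng.normal(size=(n_frames, n_neurons))   # activity pattern per frame

def record_session(noise=0.5):
    """Synthetic 'recording': each frame's template plus noise."""
    return frame_templates + rng.normal(scale=noise, size=frame_templates.shape)

X_train, y = record_session(), np.arange(n_frames)     # training session
X_test = record_session()                               # "new mouse" session

decoder = KNeighborsClassifier(n_neighbors=1).fit(X_train, y)
pred = decoder.predict(X_test)
print(f"frames decoded exactly right: {(pred == y).mean():.0%}")
```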
[…] Toucan, a browser extension, is trying a different approach, and it might just be the thing that finally clicks for you.
How Toucan works
With Toucan installed for either Chrome, Edge, or Safari, the first time you visit a website or click on an article, you’ll notice something strange: Some of the words on the page will change, and translate to your chosen language. If you’re trying to learn Portuguese, you might see a sentence like esta, but one or two palavras will be translated.
Hover your cursor over the translated word, and a pop-up will reveal what it means in English. (“Esta” is “this”; “palavras” is “words.”) This pop-up gives you additional interesting controls, such as a speaker icon you can click to hear how the word is pronounced, a mini quiz to see if you can spell the word, and a save button to highlight the word for later.
It starts out with one word at a time, but as you learn, Toucan ups the ante, adding more words in blocks, or “lexical chunks.” It makes sense, since languages don’t all share the same grammar structure. By building up to larger groups of words, you’ll more naturally learn word order, verb conjugation, and the general grammar of your chosen language.
[…]
According to the company, the extension is based on a theory called [second] language acquisition, which, in this context, can be summed up as: you learn languages best when you are immersed in the language in a relaxed manner, rather than attempting to drill the new words and grammar into your head over and over again. If you ever felt like high school Spanish class got you nowhere on your language acquisition journey, Toucan might argue it’s because that system isn’t effective for most people.
Of course, Toucan doesn’t take the Duolingo approach, either, hounding you with reminders to get in your studying. It wants you to put as little effort as possible into learning a new language. When you’re using the internet as you normally do, you’re bound to visit websites and read articles you’re actually interested in. If Toucan translates some of those words into your target language, you’ll be more inclined to pick them up, since you’re already engaged with the text rather than reading boring lesson materials. You’re doing what you always do (wasting time online) while dipping your dedos do pé into a new language.
An anonymous reader quotes a report from Reuters: Researchers said on Wednesday they have discovered that parts of the brain region called the motor cortex that govern body movement are connected with a network involved in thinking, planning, mental arousal, pain, and control of internal organs, as well as functions such as blood pressure and heart rate. They identified a previously unknown system within the motor cortex manifested in multiple nodes that are located in between areas of the brain already known to be responsible for movement of specific body parts — hands, feet and face — and are engaged when many different body movements are performed together.
The researchers called this system the somato-cognitive action network, or SCAN, and documented its connections to brain regions known to help set goals and plan actions. This network also was found to correspond with brain regions that, as shown in studies involving monkeys, are connected to internal organs including the stomach and adrenal glands, allowing these organs to change activity levels in anticipation of performing a certain action. That may explain physical responses like sweating or increased heart rate caused by merely pondering a difficult future task, they said. “Basically, we now have shown that the human motor system is not unitary. Instead, we believe there are two separate systems that control movement,” said radiology professor Evan Gordon of the Washington University School of Medicine in St. Louis, lead author of the study.
“One is for isolated movement of your hands, feet and face. This system is important, for example, for writing or speaking - movements that need to involve only the one body part. A second system, the SCAN, is more important for integrated, whole body movements, and is more connected to high-level planning regions of your brain,” Gordon said.
“Modern neuroscience does not include any kind of mind-body dualism. It’s not compatible with being a serious neuroscientist nowadays. I’m not a philosopher, but one succinct statement I like is saying, ‘The mind is what the brain does.’ The sum of the bio-computational functions of the brain makes up ‘the mind,'” said study senior author Nico Dosenbach, a neurology professor at Washington University School of Medicine. “Since this system, the SCAN, seems to integrate abstract plans-thoughts-motivations with actual movements and physiology, it provides additional neuroanatomical explanation for why ‘the body’ and ‘the mind’ aren’t separate or separable.”
A survey of plastic waste picked up in the North Pacific Subtropical Gyre—aka the Giant Pacific Garbage Patch—has revealed that the garbage is providing a home to species that would otherwise not be found in the deep ocean. Over two-thirds of the trash examined plays host to coastal marine species, many of which are clearly reproducing in what would otherwise be a foreign habitat.
The findings suggest that, as far as coastal species are concerned, there was nothing inhospitable about the open ocean other than the lack of something solid to latch on to.
[…]
To find out whether that was taking place, the researchers collected over 100 plastic debris items from the North Pacific Subtropical Gyre in late 2018/early 2019. While a handful of items could be assigned to either Asian or North American origins, most were pretty generic, such as rope and fishing netting. There was a wide variety of other items present, including bottles, crates, buckets, and household items. Some had clearly eroded significantly since their manufacture, suggesting they had been in the ocean for years.
Critically, nearly all of them had creatures living on them.
Far from home
Ninety-eight percent of the items found had some form of invertebrate living on them. In almost all cases, that included species found in the open ocean (just shy of 95 percent of the plastic). But a handful had nothing but coastal species present. And over two-thirds of the items had a mixed population of coastal and open-ocean species.
While the open-ocean species were found on more items, the researchers tended to find the same species repeatedly. That isn’t surprising, given that species adapted for a sedentary existence near the surface are infrequent in that environment. By contrast, there was far more species diversity among the coastal species that had hitched a ride out into the deeps. All told, coastal species accounted for 80 percent of the 46 taxa represented by the organisms identified.
On a per-item basis, species richness was low, with an average of only four species per item. This suggests that the primary barrier to a species colonizing an item is simply the low probability of finding it in the first place.
Significantly, the coastal species were breeding. In a number of cases, the researchers were able to identify females carrying eggs; in others, it was clear that the individuals present had a wide range of sizes, suggesting they were at different stages of maturity. Many of the species that were reproducing do so asexually, which simplifies the issue of finding a mate. Also common was a developmental pathway that skips larval stages. For many species, the larval stage is free-ranging, which would make them unlikely to re-colonize the same hunk of plastic.
The species that seemed to do best were often omnivores, or engaged in grazing or filter feeding, all options that are relatively easy to pursue without leaving the piece of plastic they called home.
A distinct ecology
One thing that struck the researchers was that the list of species present on the plastic of the North Pacific Subtropical Gyre was distinct from that found on tsunami debris. Part of that may be that some items swept across the ocean by the tsunami, like docks and boats, already had established coastal communities on them when they were lost to the sea.
[…]
With the possible exception of fishing gear and buoys, however, these plastic items likely picked up their inhabitants while passing through coastal ecosystems that were largely intact. So the colonization of these items likely represents a distinct—and ongoing—ecological process.
It also has the potential to have widespread effects on coastal ecology. While the currents that create the North Pacific Subtropical Gyre largely trap items within the Gyre, it is home to island habitats that could potentially be colonized. And it is possible that some items can cross oceans without being caught in a gyre, potentially making exchanges between coasts a relatively common occurrence in the age of plastics.
Finally, the researchers caution against a natural tendency to think of these plastic-borne coastal species as “misplaced species in an unsuitable habitat.” Instead, it appears that they are well suited to life in the open ocean as long as there’s something there that they can latch on to.
Debashis Chanda, a nanoscience researcher with the University of Central Florida, and his team have created a way to mimic nature’s ability to reflect light and create beautifully vivid color without absorbing any heat like traditional pigments do.
Chanda’s research, published in the journal Science Advances, explains and explores structural color and how people could use it to live cooler in a rapidly warming world.
Structural colors are created not from traditional pigmentation but from the arrangement of colorless materials to reflect light in certain ways. This process is how rainbows are made after it rains and how suncatchers bend light to create dazzling displays of color.
[…]
One driver for the researchers: A desire to avoid toxic materials
To create vivid colors in conventional paints, synthetic materials such as heavy metals are used.
“We use a lot of artificially synthesized organic molecules, lots of metal,” Chanda told NPR. “Think about your deep blues, you need cobalt, a deep red needs cadmium. They are toxic. We are polluting our nature and our whole habitat by using this kind of paint. So one of the major motivations for us was to create a color based on non-toxic material.”
So why can’t we simply use ground-up peacock feathers to recreate their vivid greens, blues and golds? It’s because they have no pigment. Some of the brightest colors in nature aren’t pigmented at all, peacock feathers included.
These bright, beautiful colors are achieved by the bending and reflection of light: the way the structure of a wing, a feather, or another material reflects light back at the viewer. The material doesn’t absorb any light; it beams it back out in the form of a visible color, and this is where things get interesting.
Chanda’s research began here, with his fascination with natural colors and how they are achieved in nature.
Beyond just the beautiful arrays of color that structure can create, Chanda also found that unlike pigments, structural paint does not absorb any infrared light.
Infrared light is the reason black cars get hot on sunny days and asphalt is hot to the touch in summer. Infrared light is absorbed as heat energy into these surfaces — the darker the color, the more the surface colored with it can absorb. That’s why people are advised to wear lighter colors in hotter climates and why many buildings are painted bright whites and beiges.
Chanda found that structural color paint does not absorb any heat. It reflects all infrared light back out. This means that in a rapidly warming climate, this paint could help communities keep cool.
Chanda and his team tested the impact this paint had on the temperature of buildings covered in structural paint versus commercial paints and they found that structural paint kept surfaces 20 to 30 degrees cooler.
This, Chanda said, is a massive new tool that could be used to fight rising temperatures caused by global warming while still allowing us to have a bright and colorful world.
Unlike white and black cars, structural paint’s ability to reflect heat isn’t determined by how dark the color is. Blue, black or purple structural paints reflect just as much heat as bright whites or beige. This opens the door for more colorful, cooler architecture and design without having to worry about the heat.
A little paint goes a long way
It’s not just cleaner, Chanda said. Structural paint weighs much less than pigmented paint and doesn’t fade over time like traditional pigments.
“A raisin’s worth of structural paint is enough to cover the front and back of a door,” he said.
Unlike pigmented paints, which rely on layers of pigment to achieve depth of color, structural paint requires only one thin layer of particles to fully cover a surface in color. This means that structural paint could be a boon for aerospace engineers, who rely on the lowest weight possible to achieve higher fuel efficiency.
Scientists using data from the Atacama Cosmology Telescope in Chile have made a detailed map of dark matter’s distribution across a quarter of the sky.
The map shows the distribution of mass extending essentially as far back in time as we can see; it uses the cosmic microwave background as a backdrop for the dark matter portrait. The team’s research will be presented at the Future Science with CMB x LSS conference in Kyoto, Japan.
“We have mapped the invisible dark matter across the sky to the largest distances, and clearly see features of this invisible world that are hundreds of millions of light-years across,” said Blake Sherwin, a cosmologist at the University of Cambridge, in a Princeton University release. “It looks just as our theories predict.”
[…]
the only way dark matter is observed is indirectly, through its gravitational effects at large scales. Enter the Atacama Cosmology Telescope, which more precisely dated the universe in 2021. The telescope’s map builds on a map of the universe’s matter released earlier this year, which was produced using data from the Dark Energy Survey and the South Pole Telescope. That map upheld previous estimations of the ratio of ordinary matter to dark matter and found that the distribution of the matter was less clumpy than previously thought.
The new map homes in on a lingering concern of Einstein’s general relativity: how the most massive objects in the universe, like supermassive black holes, bend light from more distant sources. One such source is the cosmic microwave background, the most ancient detectable light, which radiates from the aftermath of the Big Bang.
The researchers effectively used the background as a backlight, to illuminate regions of greater density in the universe.
“It’s a bit like silhouetting, but instead of just having black in the silhouette, you have texture and lumps of dark matter, as if the light were streaming through a fabric curtain that had lots of knots and bumps in it,” said Suzanne Staggs, director of the Atacama Cosmology Telescope and a physicist at Princeton, in the university release.
The cosmic microwave background as seen by the European Space Agency’s Planck observatory.
“The famous blue and yellow CMB image is a snapshot of what the universe was like in a single epoch, about 13 billion years ago, and now this is giving us the information about all the epochs since,” Staggs added.
The recent analysis suggests that the dark matter was lumpy enough to fit with the standard model of cosmology, which relies on Einstein’s theory of gravity.
Eric Baxter, an astronomer at the University of Hawai’i and a co-author of the research that resulted in the February dark matter map, told Gizmodo in an email that his team’s map was sensitive to low-redshifts (meaning close by, in the more recent universe). On the other hand, the newer map focuses exclusively on the lensing of the cosmic microwave background, meaning higher redshifts and a more sweeping scale.
“Said another way, our measurements and the new ACT measurements are probing somewhat different (and complementary) aspects of the matter distribution,” Baxter said. “Thus, rather than contradicting our previous results, the new results may be providing an important new piece of the puzzle about possible discrepancies with our standard cosmological model.”
“Perhaps the Universe is less lumpy than expected on small scales and at recent times (i.e. the regime probed by our analysis), but is consistent with expectations at earlier times and at larger scales,” Baxter added.
New instruments should help tease out the matter distribution of the universe. An upcoming telescope at the Simons Observatory in the Atacama is set to begin operations in 2024 and will map the sky nearly 10 times faster than the Atacama Cosmology Telescope, according to the Princeton release.
Researchers have discovered that in the exotic conditions of the early universe, waves of gravity may have shaken space-time so hard that they spontaneously created radiation.
[…]
a team of researchers have discovered that an exotic form of parametric resonance may have even occurred in the extremely early universe.
Perhaps the most dramatic event to occur in the entire history of the universe was inflation. This is a hypothetical event that took place when our universe was less than a second old. During inflation our cosmos swelled to dramatic proportions, becoming many orders of magnitude larger than it was before. The end of inflation was a very messy business, as gravitational waves sloshed back and forth throughout the cosmos.
Normally gravitational waves are exceedingly weak. We have to build detectors that are capable of measuring distances less than the width of an atomic nucleus to find gravitational waves passing through the Earth. But researchers have pointed out that in the extremely early universe these gravitational waves may have become very strong.
And they may have even created standing wave patterns where the gravitational waves weren’t traveling but the waves stood still, almost frozen in place throughout the cosmos. Since gravitational waves are literally waves of gravity, the places where the waves are the strongest represent an exceptional amount of gravitational energy.
The researchers found that this could have major consequences for the electromagnetic field existing in the early universe at that time. The regions of intense gravity may have excited the electromagnetic field enough to release some of its energy in the form of radiation, creating light.
This result gives rise to an entirely new phenomenon: the production of light from gravity alone. There’s no situation in the present-day universe that could allow this process to happen, but the researchers have shown that the early universe was a far stranger place than we could possibly imagine.
The experiment relies on materials that can change their optical properties in fractions of a second, which could be used in new technologies or to explore fundamental questions in physics.
The original double-slit experiment, performed in 1801 by Thomas Young at the Royal Institution, showed that light acts as a wave. Further experiments, however, showed that light actually behaves as both a wave and as particles – revealing its quantum nature.
These experiments had a profound impact on quantum physics, revealing the dual particle and wave nature of not just light, but other ‘particles’ including electrons, neutrons, and whole atoms.
Now, a team led by Imperial College London physicists has performed the experiment using ‘slits’ in time rather than space. They achieved this by firing light through a material that changes its properties in femtoseconds (quadrillionths of a second), only allowing light to pass through at specific times in quick succession.
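A schematic way to see why slits in time show up as fringes in the light’s frequency spectrum rather than on a screen (a back-of-the-envelope analogy of the spatial case, not a derivation from the paper): if the material lets the field through at two times $t_1$ and $t_2 = t_1 + \Delta t$, the transmitted spectrum interferes with itself.

```latex
% Two transmissions at times t_1 and t_2 = t_1 + \Delta t interfere in the
% frequency domain, by analogy with the spatial double slit.
\[
E(\omega) \;\propto\; \tilde{s}(\omega)\left(e^{i\omega t_1} + e^{i\omega t_2}\right)
\;\Rightarrow\;
|E(\omega)|^2 \;\propto\; |\tilde{s}(\omega)|^2\,\cos^2\!\left(\frac{\omega\,\Delta t}{2}\right),
\]
% so the spectrum carries fringes with spacing \Delta\omega \approx 2\pi/\Delta t:
% the closer together the two time slits, the wider the spectral fringes.
```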
Lead researcher Professor Riccardo Sapienza, from the Department of Physics at Imperial, said: “Our experiment reveals more about the fundamental nature of light while serving as a stepping-stone to creating the ultimate materials that can minutely control light in both space and time.”
Details of the experiment are published today in Nature Physics.
[…]
The material the team used was a thin film of indium tin oxide (ITO), used in most smartphone screens as a transparent conductor. Lasers changed its reflectance on ultrafast timescales, creating the ‘slits’ for light. The material responded to the laser control much more quickly than the team expected, varying its reflectivity within a few femtoseconds.
The material is a metamaterial – one that is engineered to have properties not found in nature. Such fine control of light is one of the promises of metamaterials, and when coupled with spatial control, could create new technologies and even analogues for studying fundamental physics phenomena like black holes.
Co-author Professor Sir John Pendry said: “The double time slits experiment opens the door to a whole new spectroscopy capable of resolving the temporal structure of a light pulse on the scale of one period of the radiation.”
The team next want to explore the phenomenon in a ‘time crystal’, which is analogous to an atomic crystal, but where the optical properties vary in time.
Co-author Professor Stefan Maier said: “The concept of time crystals has the potential to lead to ultrafast, parallelized optical switches.”
What does a stressed plant sound like? A bit like bubble-wrap being popped. Researchers in Israel report in the journal Cell on March 30 that tomato and tobacco plants that are stressed—from dehydration or having their stems severed—emit sounds that are comparable in volume to normal human conversation. The frequency of these noises is too high for our ears to detect, but they can probably be heard by insects, other mammals, and possibly other plants.
“Even in a quiet field, there are actually sounds that we don’t hear, and those sounds carry information,” says senior author Lilach Hadany, an evolutionary biologist and theoretician at Tel Aviv University. “There are animals that can hear these sounds, so there is the possibility that a lot of acoustic interaction is occurring.”
Although ultrasonic vibrations have been recorded from plants before, this is the first evidence that they are airborne, a fact that makes them more relevant for other organisms in the environment. “Plants interact with insects and other animals all the time, and many of these organisms use sound for communication, so it would be very suboptimal for plants to not use sound at all,” says Hadany.
The researchers used microphones to record healthy and stressed tomato and tobacco plants, first in a soundproofed acoustic chamber and then in a noisier greenhouse environment. They stressed the plants via two methods: by not watering them for several days and by cutting their stems. After recording the plants, the researchers trained a machine-learning algorithm to differentiate between unstressed plants, thirsty plants, and cut plants.
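For readers curious what such a classifier could look like in practice, here is a minimal sketch under stated assumptions. It is not the authors' pipeline; the data, sampling rate, and features are placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def spectral_features(clip, fs=500_000):
    """Crude per-click features: peak frequency, spectral centroid, energy."""
    spectrum = np.abs(np.fft.rfft(clip))
    freqs = np.fft.rfftfreq(len(clip), 1 / fs)
    peak_freq = freqs[np.argmax(spectrum)]
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    energy = float(np.sum(clip ** 2))
    return [peak_freq, centroid, energy]

# Placeholder data: in a real pipeline each row would be a short ultrasonic
# recording cut around a detected click, labelled 0 = intact, 1 = dry, 2 = cut.
rng = np.random.default_rng(0)
recordings = rng.normal(size=(300, 2048))
labels = rng.integers(0, 3, size=300)

X = np.array([spectral_features(clip) for clip in recordings])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```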
The team found that stressed plants emit more sounds than unstressed plants. The plant sounds resemble pops or clicks, and a single stressed plant emits around 30–50 of these clicks per hour at seemingly random intervals, but unstressed plants emit far fewer sounds. “When tomatoes are not stressed at all, they are very quiet,” says Hadany.
An audio recording of plant sounds. The frequency was lowered so that it is audible to human ears. Credit: Khait et al.
Water-stressed plants began emitting noises before they were visibly dehydrated, and the frequency of sounds peaked after five days with no water before decreasing again as the plants dried up completely. The types of sound emitted differed with the cause of stress. The machine-learning algorithm was able to accurately differentiate between dehydration and stress from cutting and could also discern whether the sounds came from a tomato or tobacco plant.
Although the study focused on tomato and tobacco plants because they are easy to grow and standardize in the laboratory, the research team also recorded a variety of other plant species. “We found that many plants—corn, wheat, grape, and cactus plants, for example—emit sounds when they are stressed,” says Hadany.
A photo of a cactus being recorded. Credit: Itzhak Khait
The exact mechanism behind these noises is unclear, but the researchers suggest that it might be due to the formation and bursting of air bubbles in the plant’s vascular system, a process called cavitation.
Whether or not the plants are producing these sounds in order to communicate with other organisms is also unclear, but the fact that these sounds exist has big ecological and evolutionary implications. “It’s possible that other organisms could have evolved to hear and respond to these sounds,” says Hadany. “For example, a moth that intends to lay eggs on a plant or an animal that intends to eat a plant could use the sounds to help guide their decision.”
Other plants could also be listening in and benefiting from the sounds. We know from previous research that plants can respond to sounds and vibrations: Hadany and several other members of the team previously showed that plants increase the concentration of sugar in their nectar when they “hear” the sounds made by pollinators, and other studies have shown that plants change their gene expression in response to sounds. “If other plants have information about stress before it actually occurs, they could prepare,” says Hadany.
An illustration of a dehydrated tomato plant being recorded using a microphone. Credit: Liana Wait
Sound recordings of plants could be used in agricultural irrigation systems to monitor crop hydration status and help distribute water more efficiently, the authors say.
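As a purely hypothetical illustration of that idea, an irrigation controller could key off the click rate reported in the study; the threshold and function below are illustrative, not recommendations from the authors:

```python
# All thresholds here are illustrative, not values recommended by the study.
DRY_CLICKS_PER_HOUR = 30   # stressed tomato plants emitted roughly 30-50 clicks per hour

def should_irrigate(clicks_last_hour: int, threshold: int = DRY_CLICKS_PER_HOUR) -> bool:
    """Trigger watering when the ultrasonic click rate suggests drought stress."""
    return clicks_last_hour >= threshold

print(should_irrigate(42))   # True  -> this bed sounds thirsty
print(should_irrigate(3))    # False -> quiet plants, likely well watered
```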
“We know that there’s a lot of ultrasound out there—every time you use a microphone, you find that a lot of stuff produces sounds that we humans cannot hear—but the fact that plants are making these sounds opens a whole new avenue of opportunities for communication, eavesdropping, and exploitation of these sounds,” says co-senior author Yossi Yovel, a neuro-ecologist at Tel Aviv University.
“So now that we know that plants do emit sounds, the next question is—’who might be listening?'” says Hadany. “We are currently investigating the responses of other organisms, both animals and plants, to these sounds, and we’re also exploring our ability to identify and interpret the sounds in completely natural environments.”
This is not a newly discovered phenomenon; plants, grasses, and trees are very good at detecting stress and warning of it, e.g.:
Stressed plants show altered phenotypes, including changes in color, smell, and shape. Yet, the possibility that plants emit airborne sounds when stressed – similarly to many animals – has not been investigated. Here we show, to our knowledge for the first time, that stressed plants emit airborne sounds that can be recorded remotely, both in acoustic chambers and in greenhouses. We recorded ∼65 dBSPL ultrasonic sounds 10 cm from tomato and tobacco plants, implying that these sounds could be detected by some organisms from up to several meters away. We developed machine learning models that were capable of distinguishing between plant sounds and general noises, and identifying the condition of the plants – dry, cut, or intact – based solely on the emitted sounds. Our results suggest that animals, humans, and possibly even other plants, could use sounds emitted by a plant to gain information about the plant’s condition. More investigation on plant bioacoustics in general and on sound emission in plants in particular may open new avenues for understanding plants and their interactions with the environment, and it may also have a significant impact on agriculture.
Source: Plants emit informative airborne sounds under stress
The remarkable ability of plants to respond to their environment has led some scientists to believe it’s a sign of conscious awareness. A new opinion paper argues against this position, saying plants “neither possess nor require consciousness.”
To explain these apparent behaviors, a subset of scientists known as plant neurobiologists has argued that plants possess a form of consciousness. Most notably, evolutionary ecologist Monica Gagliano has performed experiments that allegedly hint at capacities such as habituation (learning from experience) and classical conditioning (like Pavlov’s salivating dogs). In these experiments, plants apparently “learned” to stop curling their leaves after being dropped repeatedly or to spread their leaves in anticipation of a light source. Armed with this experimental evidence, Gagliano and others have claimed, quite controversially, that because plants can learn and exhibit other forms of intelligence, they must be conscious.
Nonsense, argues a new paper published today in Trends in Plant Science. The lead author of the new paper, biologist Lincoln Taiz from the University of California at Santa Cruz, isn’t denying plant intelligence, but he makes a strong case against plants being conscious.
Ultrasonic acoustic emission (UAE) in trees is often related to collapsing water columns in the flow path as a result of tensions that are too strong (cavitation). However, in a decibel (dB) range below that associated with cavitation, a close relationship was found between UAE intensities and stem radius changes.
UAE was continuously recorded on the stems of mature field-grown trees of Scots pine (Pinus sylvestris) and pubescent oak (Quercus pubescens) at a dry inner-Alpine site in Switzerland over two seasons. The averaged 20-Hz records were related to microclimatic conditions in air and soil, sap-flow rates and stem-radius fluctuations de-trended for growth (ΔW).
Within a low-dB range (27 ± 1 dB), UAE regularly increased and decreased in a diurnal rhythm in parallel with ΔW on cloudy days and at night. These low-dB emissions were interrupted by UAE abruptly switching between the low-dB range and a high-dB range (36 ± 1 dB) on clear, sunny days, corresponding to the widely supported interpretation of UAE as sound from cavitations.
It is hypothesized that the low-dB signals in drought-stressed trees are caused by respiration and/or cambial growth as these physiological activities are tissue water-content dependent and have been shown to produce courses of CO2 efflux similar to our courses of ΔW and low-dB UAE.
A mammoth meatball has been created by a cultivated meat company, resurrecting the flesh of the long-extinct animals.
The project aims to demonstrate the potential of meat grown from cells, without the slaughter of animals, and to highlight the link between large-scale livestock production and the destruction of wildlife and the climate crisis.
The mammoth meatball was produced by Vow Food, an Australian company, which is taking a different approach to cultured meat.
There are scores of companies working on replacements for conventional meat, such as chicken, pork and beef. But Vow Food is aiming to mix and match cells from unconventional species to create new kinds of meat.
The company has already investigated the potential of more than 50 species, including alpaca, buffalo, crocodile, kangaroo, peacocks and different types of fish.
The first cultivated meat to be sold to diners will be Japanese quail, which the company expects will be in restaurants in Singapore this year.
“We have a behaviour change problem when it comes to meat consumption,” said George Peppou, CEO of Vow Food.
“The goal is to transition a few billion meat eaters away from eating [conventional] animal protein to eating things that can be produced in electrified systems.
“And we believe the best way to do that is to invent meat. We look for cells that are easy to grow, really tasty and nutritious, and then mix and match those cells to create really tasty meat.”
Tim Noakesmith, who cofounded Vow Food with Peppou, said: “We chose the woolly mammoth because it’s a symbol of diversity loss and a symbol of climate change.” The creature is thought to have been driven to extinction by hunting by humans and the warming of the world after the last ice age.
[…]
Cultivated meat – chicken from Good Meat – is currently only sold to consumers in Singapore, but two companies have now passed an approval process in the US.
Vow Food worked with Prof Ernst Wolvetang, at the Australian Institute for Bioengineering at the University of Queensland, to create the mammoth muscle protein. His team took the DNA sequence for mammoth myoglobin, a key muscle protein in giving meat its flavour, and filled in the few gaps using elephant DNA.
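Conceptually, that gap-filling step works like patching an aligned ancient-DNA fragment with bases from a close living relative. The snippet below is a toy illustration with made-up sequences, not the actual myoglobin data or the team's method:

```python
# Conceptual sketch only: the sequences below are invented fragments,
# not real mammoth or elephant myoglobin DNA.
mammoth_fragment   = "ATGGGT---CTGAGCGAT---GAATGG"   # '-' marks unreadable bases
elephant_reference = "ATGGGTCTGCTGAGCGATGGCGAATGG"

def fill_gaps(fragment: str, reference: str) -> str:
    """Replace gap characters with the aligned base from the reference."""
    assert len(fragment) == len(reference), "sequences must be pre-aligned"
    return "".join(r if f == "-" else f for f, r in zip(fragment, reference))

print(fill_gaps(mammoth_fragment, elephant_reference))
# -> ATGGGTCTGCTGAGCGATGGCGAATGG
```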
This sequence was placed in myoblast stem cells from a sheep, which replicated to grow to the 20bn cells subsequently used by the company to grow the mammoth meat.
“It was ridiculously easy and fast,” said Wolvetang. “We did this in a couple of weeks.” Initially, the idea was to produce dodo meat, he said, but the DNA sequences needed do not exist.
[…]
Seren Kell, at the Good Food Institute Europe, said: “I hope this fascinating project will open up new conversations about cultivated meat’s extraordinary potential to produce more sustainable food.
“However, as the most common sources of meat are farm animals such as cattle, pigs, and poultry, most of the sustainable protein sector is focused on realistically replicating meat from these species.
“By cultivating beef, pork, chicken and seafood we can have the most impact in terms of reducing emissions from conventional animal agriculture.”
Computer scientists found the holy grail of tiles. They call it the “einstein,” one shape that alone can cover a plane without ever repeating a pattern.
And all it takes for this special shape is 13 sides.
In the world of mathematics, an “aperiodic monotile”—also known as an “einstein,” from the German for “one stone” (ein Stein)—is a single shape that can tile a plane without the pattern ever repeating.
“In this paper we present the first true aperiodic monotile, a shape that forces aperiodicity through geometry alone, with no additional constraints applied via matching conditions,” writes Craig Kaplan, a computer science professor from the University of Waterloo and one of the four authors of the paper. “We prove that this shape, a polykite that we call ‘the hat,’ must assemble into tilings based on a substitution system.”
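A one-dimensional analogue may help with the “substitution system” idea. The Fibonacci substitution below — not the hat tiling itself, just a far simpler system of the same flavour — generates a string that provably never becomes periodic:

```python
# 1D analogue of a substitution system (illustrative; not the hat tiling).
RULES = {"a": "ab", "b": "a"}   # Fibonacci substitution

def substitute(word: str, steps: int) -> str:
    """Apply the substitution rules repeatedly, starting from `word`."""
    for _ in range(steps):
        word = "".join(RULES[c] for c in word)
    return word

print(substitute("a", 8))   # "abaababaabaab..." -- never settles into a repeating block
```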
[…]
The history of the aperiodic tile has never had a breakthrough like this one. The first aperiodic sets had over 20,000 tiles, Kaplan tweets. “Subsequent research lowered that number, to sets of size 92, then six, and then two in the form of the famous Penrose tiles.” But those Penrose tiles were from 1974.
[…]
The team proved the shape’s properties with the help of computer code, and, in a fascinating aside, the shape doesn’t lose its aperiodic nature even when the lengths of its sides change.