Similar to a light switch, RNA switches (called riboswitches) determine which genes turn “on” and “off.” Although this may seem like a simple process, the inner workings of these switches have confounded biologists for decades.
Now researchers led by Northwestern University and the University at Albany have discovered that one part of an RNA molecule smoothly invades and displaces another part of the same RNA, enabling the structure to rapidly and dramatically change shape. This mechanism, called “strand displacement,” appears to switch genetic expression from “on” to “off.”
The researchers made this discovery by watching a slow-motion simulation of a riboswitch up close and in action, using a simulation platform they launched last year. Affectionately called R2D2 (short for “reconstructing RNA dynamics from data”), the simulation models RNA in three dimensions as it binds to a compound, communicates along its length and folds to turn a gene “on” or “off.”
[…]
“We have found this strand displacement mechanism occurring in other types of RNA molecules, indicating this might be a potential generality of RNA folding,” said Northwestern’s Julius B. Lucks, who co-led the study. “We are starting to find similarities among different types of RNA molecules, which could eventually lead to RNA design rules for folding and function.”
[…]
Although RNA folding takes place in the human body more than 10 quadrillion times per second—every time a gene is expressed in a cell—researchers know very little about the process. To help visualize and understand the mysterious yet crucial process, Lucks and Chen unveiled R2D2 last year, in a paper published in the journal Molecular Cell.
Employing a technology platform developed in Lucks’ lab, R2D2 captures data related to RNA folding as the RNA is being made. It then uses computational tools to mine and organize the data, revealing points where the RNA folds and what happens after it folds. Angela Yu, a former student of Lucks, fed this data into computer models to generate accurate videos of the folding process.
“What’s so groundbreaking about the R2D2 approach…is that it combines experimental data on RNA folding at the nucleotide level with predictive algorithms at the atomic level to simulate RNA folding in ultra-slow motion,” said Dr. Francis Collins, director of the National Institutes of Health, in his February 2021 blog. “While other computer simulations have been available for decades, they have lacked much-needed experimental data of this complex folding process to confirm their mathematical modeling.”
Extracts of okra and other slimy plants commonly used in cooking can help remove dangerous microplastics from wastewater, scientists said Tuesday.
The new research was presented at the spring meeting of the American Chemical Society, and offers an alternative to the synthetic chemicals currently used in treatment plants that can themselves pose risks to health.
“In order to go ahead and remove microplastic or any other type of materials, we should be using natural materials which are non-toxic,” lead investigator Rajani Srinivasan, of Tarleton State University, said in an explainer video.
[…]
Srinivasan’s past research had examined how the goo from okra and other plants could remove textile-based pollutants from water and even microorganisms, and she wanted to see if that would equally apply to microplastics.
[…]
Typical wastewater treatment removes microplastics in two steps.
First, those that float are skimmed off the top of the water. These, however, account for only a small fraction; the rest are removed using flocculants, sticky chemicals that attract microplastics into larger clumps.
The clumps sink to the bottom and can then be separated from the water.
The problem is that these synthetic flocculants, such as polyacrylamide, can break down into toxic chemicals.
[…]
They tested chains of carbohydrates, known as polysaccharides, from the individual plants, as well as in combination, on various samples of microplastic-contaminated water, comparing microscope images taken before and after treatment to determine how many particles had been removed.
They found that polysaccharides from okra paired with those from fenugreek could best remove microplastics from ocean water, while polysaccharides from okra paired with tamarind worked best in freshwater samples.
Overall, the plant-based polysaccharides worked as well as or better than polyacrylamide. Crucially, the plant-based chemicals are non-toxic and can be used in existing treatment plants.
Leicester space scientists have discovered a never-before-seen mechanism fuelling huge planetary aurorae at Saturn.
Saturn is unique among planets observed to date in that some of its aurorae are generated by swirling winds within its own atmosphere, and not just from the planet’s surrounding magnetosphere.
At all other observed planets, including Earth, aurorae are only formed by powerful currents that flow into the planet’s atmosphere from the surrounding magnetosphere. These are driven by either interaction with charged particles from the Sun (as at the Earth) or volcanic material erupted from a moon orbiting the planet (as at Jupiter and Saturn).
This discovery changes scientists’ understanding of planetary aurorae and answers one of the first mysteries raised by NASA’s Cassini probe, which reached Saturn in 2004: why can’t we easily measure the length of a day on the Ringed Planet?
When it first arrived at Saturn, Cassini tried to measure the bulk rotation rate of the planet, which determines the length of its day, by tracking radio emission ‘pulses’ from Saturn’s atmosphere. To the great surprise of those making the measurements, the rate appeared to have changed over the two decades since Voyager 2, also operated by NASA, had last flown past the planet in 1981.
Leicester Ph.D. researcher Nahid Chowdhury is a member of the Planetary Science Group within the School of Physics and Astronomy and corresponding author for the study, published in Geophysical Research Letters. He said:
“Saturn’s internal rotation rate has to be constant, but for decades researchers have shown that numerous periodic properties related to the planet—the very measurements we’ve used at other planets to understand the internal rotation rate, such as the radio emission—tend to change with time. What’s more, there are also independent periodic features seen in the northern and southern hemispheres which themselves vary over the course of a season on the planet.
“Our understanding of the physics of planetary interiors tells us the true rotation rate of the planet can’t change this quickly, so something unique and strange must be happening at Saturn. Several theories have been touted since the advent of the NASA Cassini mission trying to explain the mechanism/s behind these observed periodicities. This study represents the first detection of the fundamental driver, situated in the upper atmosphere of the planet, which goes on to generate both the observed planetary periodicities and aurorae.
Simplified figure showing the direction of winds within layers of Saturn’s atmosphere. Credit: Nahid Chowdhury/University of Leicester
“It’s absolutely thrilling to be able to provide an answer to one of the longest standing questions in our field. This is likely to initiate some rethinking about how local atmospheric weather effects on a planet impact the creation of aurorae, not just in our own Solar System but farther afield too.”
[…]
They measured infrared emissions from the gas giant’s upper atmosphere using the Keck Observatory in Hawai’i and mapped the varying flows of Saturn’s ionosphere, far below the magnetosphere, over the course of a month in 2017.
This map, when fixed against the known pulse of Saturn’s radio aurorae, showed that a significant proportion of the planet’s aurorae are generated by the swirling pattern of weather in its atmosphere, and that this is responsible for the planet’s observed variable rate of rotation.
Researchers believe the system is driven by energy from Saturn’s thermosphere, with winds in the ionosphere observed between 0.3 and 3.0 kilometres per second.
[…]
Recently, many researchers have focused on the possibility that it is Saturn’s upper atmosphere that causes this variability.
“This search for a new type of aurora harks back to some of the earliest theories about Earth’s aurora. We now know that aurorae on Earth are powered by interactions with the stream of charged particles driven from the Sun. But I love that the name Aurora Borealis originates from the ‘the Dawn of the Northern Wind’. These observations have revealed that Saturn has a true Aurora Borealis—the first ever aurora driven by the winds in the atmosphere of a planet.”
Dr. Kevin Baines, a JPL-Caltech-based co-author of the study and a member of the Cassini Science Team, added:
“Our study, by conclusively determining the origin of the mysterious variability in radio pulses, eliminates much of the confusion surrounding Saturn’s bulk rotation rate and the length of the day on Saturn.”
Because of the variable rotation rates observed at Saturn, scientists had been prevented from using the regular pulse of radio emission to calculate the bulk internal rotation rate. Fortunately, Cassini scientists developed a novel method using gravity-induced perturbations in Saturn’s complex ring system, which now seems to be the most accurate means of measuring the planet’s bulk rotational period. In 2019, that period was determined to be 10 hours, 33 minutes and 38 seconds.
A team of researchers from Beihang University, the Peking University School and Hospital of Stomatology and the Michigan Institute of Translational Nanotechnology has developed a synthetic enamel with properties similar to natural tooth enamel. In their paper published in the journal Science, the group describes their enamel and how well it compared to natural enamel when tested.
[…]
Prior research has shown that the reason that human enamel is so strong and yet also slightly elastic is because it consists of tiny rods made of calcium that are packed tightly together like pencils in a box. In their new effort, the researchers attempted to mimic tooth enamel as closely as possible by producing a material using AIP-coated hydroxyapatite nanowires that were aligned in parallel using a freezing technique that involved applying polyvinyl alcohol.
The researchers applied the enamel to a variety of shapes, including human teeth, and then tested how well it performed. They found it had a high degree of stiffness, was strong and was also slightly elastic. They also found that on most of their tests, the synthetic enamel outperformed natural enamel.
The researchers plan to keep testing their material to make sure it will hold up under harsh environments such as those found in the human mouth. They will also have to show that it is safe for use in humans and that it can be mass produced. They note that if their enamel passes all such tests, it could be used in more than just dentistry—they suggest it could be used to coat pacemakers, for example, or to shore up bones that have been damaged or that have eroded due to use or disease.
Researchers were able to identify 74 species of animals by looking for DNA in air samples collected at two zoos. The experiment shows that free-floating DNA could be used to track wild animals, including endangered or invasive species, without needing to observe them directly.
Environmental DNA (eDNA) has shaken up how animal populations can be monitored, managed, and conserved. Instead of having to find physical evidence of animals—scales, fur, feces, or sightings—researchers can rely on the microscopic bits of genetic material that fall off creatures as they move around their environment. Merely taking a soil or water sample can give researchers a sense of an entire ecosystem.
But researchers have wondered whether air could provide the same level of information as soil and water. Last year, a UK-based team detected naked mole rat DNA by sampling air from the rodents’ burrows in a lab setting. (They also detected human DNA, presumably from the researchers who worked in the lab.) But proving the method’s success in open air was a different beast. To test the technique further, two research teams used a setting that included unmistakeable subjects: zoos in England and Denmark. Their two papers are published today in Current Biology.
[…]
To run their experiment, the scientists used a fan with a filter, drawing in air from within and around the zoo. The team then used polymerase chain reaction (PCR)—the same tech used in many covid-19 tests—to amplify the genetic information on the filter, essentially creating many copies of the genetic material they found. They were able to identify 25 species in the UK and 49 species in Denmark. In the UK study, eight of the identified species were animals native to the area rather than zoo inhabitants, while six non-zoo animals were detected in the Denmark study.
Elizabeth Clare, author of one of the studies, samples air for environmental DNA. Photo: Elizabeth Clare
[…]
The closer to extinction a species creeps, the harder it is for it to be monitored. eDNA methods make that conservation work easier. It means keeping track of the last vaquitas and perhaps settling the debate over the fate of the ivory-billed woodpecker.
Airborne DNA still requires more research, but Clare noted how quickly waterborne DNA became a widely used method in conservation. Perhaps the latest innovation in DNA surveys will happen sooner than we think.
[…] Researchers at the biotechnology startup Cortical Labs have created “mini-brains” consisting of 800,000 to one million living human brain cells in a petri dish, New Scientist reports. The cells are placed on top of a microelectrode array that analyzes the neural activity.
[…]
To teach the mini-brains the game, the team created a simplified version of “Pong” with no opponent. A signal is sent to either the right or left of the array to indicate where the ball is, and the neurons from the brain cells send signals back to move the paddle.
“We often refer to them as living in the Matrix,” Kagan told the magazine, in a horrifying reference to the 1999 movie in which humans are enslaved by AI overlords in an all-encompassing simulation. “When they are in the game, they believe they are the paddle.”
Well, that’s a scary enough concept to cause some existential panic for anyone.
Faster Than AI
Kagan said that while the mini-brains can’t play the game as well as a human, they do learn faster than some AIs.
“The amazing aspect is how quickly it learns, in five minutes, in real time,” he told New Scientist. “That’s really an amazing thing that biology can do.”
While this is certainly some amazing Twitch fodder, the team at Cortical Labs hope to use their findings to develop sophisticated technology using “live biological neurons integrated with traditional silicon computing,” according to the outfit’s website.
New research from the University of Massachusetts Amherst provides a novel answer to one of the persistent questions in historical climatology, environmental history and the earth sciences: what caused the Little Ice Age? The answer, we now know, is a paradox: warming.
The Little Ice Age was one of the coldest periods of the past 10,000 years, a period of cooling that was particularly pronounced in the North Atlantic region. This cold spell, whose precise timeline scholars debate, but which seems to have set in around 600 years ago, was responsible for crop failures, famines and pandemics throughout Europe, resulting in misery and death for millions. To date, the mechanisms that led to this harsh climate state have remained inconclusive. However, a new paper published recently in Science Advances gives an up-to-date picture of the events that brought about the Little Ice Age. Surprisingly, the cooling appears to have been triggered by an unusually warm episode.
When lead author Francois Lapointe, postdoctoral researcher and lecturer in geosciences at UMass Amherst, and Raymond Bradley, distinguished professor in geosciences at UMass Amherst, began carefully examining their 3,000-year reconstruction of North Atlantic sea surface temperatures, the results of which were published in the Proceedings of the National Academy of Sciences in 2020, they noticed something surprising: a sudden change from very warm conditions in the late 1300s to unprecedented cold conditions in the early 1400s, only 20 years later.
Using many detailed marine records, Lapointe and Bradley discovered that there was an abnormally strong northward transfer of warm water in the late 1300s which peaked around 1380. As a result, the waters south of Greenland and the Nordic Seas became much warmer than usual. “No one has recognized this before,” notes Lapointe.
Normally, there is always a transfer of warm water from the tropics to the Arctic. It’s a well-known process called the Atlantic Meridional Overturning Circulation (AMOC), which is like a planetary conveyor belt. Typically, warm water from the tropics flows north along the coast of Northern Europe, and when it reaches higher latitudes and meets colder Arctic waters, it loses heat and becomes denser, causing the water to sink to the bottom of the ocean. This deep-water formation then flows south along the coast of North America and continues on to circulate around the world.
But in the late 1300s, AMOC strengthened significantly, which meant that far more warm water than usual was moving north, which in turn caused rapid Arctic ice loss. Over the course of a few decades in the late 1300s and 1400s, vast amounts of ice were flushed out into the North Atlantic, which not only cooled the North Atlantic waters but also diluted their saltiness, ultimately causing AMOC to collapse. It is this collapse that then triggered a substantial cooling.
Fast-forward to our own time: between the 1960s and 1980s, we have also seen a rapid strengthening of AMOC, which has been linked with persistently high pressure in the atmosphere over Greenland. Lapointe and Bradley think the same atmospheric situation occurred just prior to the Little Ice Age—but what could have set off that persistent high-pressure event in the 1380s?
The answer, Lapointe discovered, is to be found in trees. Once the researchers compared their findings to a new record of solar activity revealed by radiocarbon isotopes preserved in tree rings, they discovered that unusually high solar activity was recorded in the late 1300s. Such solar activity tends to lead to high atmospheric pressure over Greenland.
At the same time, fewer volcanic eruptions were happening on earth, which means that there was less ash in the air. A “cleaner” atmosphere meant that the planet was more responsive to changes in solar output. “Hence the effect of high solar activity on the atmospheric circulation in the North-Atlantic was particularly strong,” said Lapointe.
Lapointe and Bradley have been wondering whether such an abrupt cooling event could happen again in our age of global climate change. They note that there is now much less arctic sea ice due to global warming, so an event like that in the early 1400s, involving sea ice transport, is unlikely. “However, we do have to keep an eye on the build-up of freshwater in the Beaufort Sea (north of Alaska) which has increased by 40% in the past two decades. Its export to the subpolar North Atlantic could have a strong impact on oceanic circulation”, said Lapointe. “Also, persistent periods of high pressure over Greenland in summer have been much more frequent over the past decade and are linked with record-breaking ice melt. Climate models do not capture these events reliably and so we may be underestimating future ice loss from the ice sheet, with more freshwater entering the North Atlantic, potentially leading to a weakening or collapse of the AMOC.” The authors conclude that there is an urgent need to address these uncertainties.
[…] It’s estimated that a quarter of the world’s population is affected by the condition known as presbyopia, which is one of the many unfortunate side effects of aging that typically starts affecting people in their 40s. The condition limits a person’s ability to focus on nearby objects, such as small print
[…]
the use of eye drops once every morning.
The active ingredient in Vuity is pilocarpine, which is often used to treat dry mouth because it stimulates the production of saliva, but it also causes the eye to reduce the size of the pupil’s opening. Like reducing the size of the aperture on a camera, this increases the eye’s depth of field, resulting in more of what’s seen being in focus, including close-up objects.
In human studies where a total of 750 participants aged 40-55 diagnosed with presbyopia were either given Vuity or a placebo, those using the Vuity eye drops gained the ability to read three or more additional lines of text on an optometrist’s reading chart (where each subsequent line contains smaller and smaller samples of text) and maintain those improvements after 30 days of use without affecting distance vision. However, Vuity was found to be considerably less helpful for patients over 65, who would need to rely on more traditional approaches to correcting vision issues.
The studies were conducted three hours after doses were administered, and it takes about that long for the full effect of Vuity to kick in, but the effect typically lasts for about a full day, which means the eye drops really only need to be applied once every morning. A reduction in pupil size does mean less light is entering the eye and hitting the retina, but it shouldn’t have an effect on users’ vision, given the eye’s impressive ability to adapt to changing lighting conditions.
A research team at City University of Hong Kong (CityU) has discovered a new type of sound wave: an airborne sound wave that vibrates transversely and carries both spin and orbital angular momentum, like light does. The findings shattered scientists’ previous beliefs about sound waves, opening an avenue to the development of novel applications in acoustic communications, acoustic sensing and imaging.
The research was initiated and co-led by Dr. Shubo Wang, Assistant Professor in the Department of Physics at CityU, and conducted in collaboration with scientists from Hong Kong Baptist University (HKBU) and the Hong Kong University of Science and Technology (HKUST). It was published in Nature Communications, titled “Spin-orbit interactions of transverse sound.”
Beyond the conventional understanding of sound waves
The physics textbooks tell us there are two kinds of waves. In transverse waves like light, the vibrations are perpendicular to the direction of wave propagation. In longitudinal waves like sound, the vibrations are parallel to the direction of wave propagation. But the latest discovery by scientists from CityU changes this understanding of sound waves.
“While the airborne sound is a longitudinal wave in usual cases, we demonstrated for the first time that it can be a transverse wave under certain conditions. And we investigated its spin-orbit interactions (an important property that exists only in transverse waves), i.e. the coupling between two types of angular momentum. The finding provides new degrees of freedom for sound manipulations.”
The absence of shear force in the air, or fluids, is the reason why sound is a longitudinal wave, Dr. Wang explained. He had been exploring whether it is possible to realize transverse sound, which requires shear force. Then he conceived the idea that synthetic shear force may arise if the air is discretized into “meta-atoms,” i.e., volumetric air confined in small resonators with size much smaller than the wavelength. The collective motion of these air “meta-atoms” can give rise to a transverse sound on the macroscopic scale.
Negative refraction induced by the spin-orbit interaction in momentum space. Credit: S. Wang et al. DOI: 10.1038/s41467-021-26375-9
Conception and realization of ‘micropolar metamaterial’
To implement this idea, he ingeniously designed a type of artificial material called a “micropolar metamaterial,” which looks like a complex network of resonators. Air is confined inside these mutually connected resonators, forming the “meta-atoms.” The metamaterial is hard enough that only the air inside can vibrate and support sound propagation. Theoretical calculations showed that the collective motion of these air “meta-atoms” indeed produces the shear force, which gives rise to transverse sound with spin-orbit interactions inside this metamaterial. The theory was verified by experiments conducted by Dr. Ma Guancong’s group at HKBU.
Moreover, the research team discovered that air behaves like an elastic material inside the micropolar metamaterial and thus supports transverse sound with both spin and orbital angular momentum. Using this metamaterial, they demonstrated two types of spin-orbit interactions of sound for the first time. One is the momentum-space spin-orbit interaction, which gives rise to negative refraction of the transverse sound, meaning that sound bends in the opposite direction when passing through an interface. The other is the real-space spin-orbit interaction, which generates sound vortices under the excitation of the transverse sound.
Warp drive pioneer and former NASA warp drive specialist Dr. Harold G “Sonny” White has reported the successful manifestation of an actual, real-world “Warp Bubble.” And, according to White, this first of its kind breakthrough by his Limitless Space Institute (LSI) team sets a new starting point for those trying to manufacture a full-sized, warp-capable spacecraft.
“To be clear, our finding is not a warp bubble analog, it is a real, albeit humble and tiny, warp bubble,” White told The Debrief, quickly dispensing with the notion that this is anything other than the creation of an actual, real-world warp bubble. “Hence the significance.”
[…]
“While conducting analysis related to a DARPA-funded project to evaluate possible structure of the energy density present in a Casimir cavity as predicted by the dynamic vacuum model,” reads the actual findings published in the peer-reviewed European Physical Journal, “a micro/nano-scale structure has been discovered that predicts negative energy density distribution that closely matches requirements for the Alcubierre metric.”
Or put more simply, as White did in a recent email to The Debrief, “To my knowledge, this is the first paper in the peer-reviewed literature that proposes a realizable nano-structure that is predicted to manifest a real, albeit humble, warp bubble.”
This fortuitous finding, says White, not only confirms the predicted “toroidal” structure and negative energy aspects of a warp bubble, but also points to potential pathways he and other researchers can follow when trying to design, and one day actually construct, a real-world warp-capable spacecraft.
[…]
“This is a potential structure we can propose to the community that one could build that will generate a negative vacuum energy density distribution that is very similar to what’s required for an Alcubierre space warp.”
When asked by The Debrief in December if his team has built and tested this proposed nano-scale warp craft design since that August announcement, or if they have plans to do so, White said, “We have not manufactured the one-micron sphere in the middle of a 4-micron cylinder.” However, he noted, if the LSI team were to undertake that at some point, “we’d probably use a nanoscribe GT 3D printer that prints at the nanometer scale.” In short, they have the means, now they just need the opportunity.
[…]
White and his team have also outlined a second testable experiment that involves stringing a number of these Casimir-created warp bubbles in a chain-like configuration. This design, he said, would allow researchers to better understand the physics of the warp bubble structure already created, as well as how a craft may one day traverse actual space inside such a warp bubble.
“We could go through an examination of the optical properties as a result of these little, nano-scale warp bubbles,” explained White at the AIAA conference. “Aggregating a large number of them in a row, we can increase the magnitude of the effect so we can see (and study) it.”
To understand how the pandemic is evolving, it’s crucial to know how death rates from COVID-19 are affected by vaccination status. The death rate is a key metric that can accurately show us how effective vaccines are against severe forms of the disease. That effectiveness may change over time with changes in the prevalence of COVID-19, and because of factors such as waning immunity, new strains of the virus, and the use of boosters.
On this page, we explain why it is essential to look at death rates by vaccination status rather than the absolute number of deaths among vaccinated and unvaccinated people.
We also visualize this mortality data for the United States, England, and Chile.
Ideally we would produce a global dataset that compiles this data for countries around the world, but we do not have the capacity to do this in our team. As a minimum, we list country-specific sources where you can find similar data for other countries, and we describe how an ideal dataset would be formatted.
Why we need to compare the rates of death between vaccinated and unvaccinated
During a pandemic, you might see headlines like “Half of those who died from the virus were vaccinated”.
It would be wrong to draw any conclusions about whether the vaccines are protecting people from the virus based on this headline. The headline is not providing enough information to draw any conclusions.
Let’s think through an example to see this.
Imagine we live in a place with a population of 60 people.
Then we learn that of the 10 who died from the virus, 50% were vaccinated.
The newspaper may run the headline “Half of those who died from the virus were vaccinated”. But this headline does not tell us anything about whether the vaccine is protecting people or not.
To be able to say anything, we also need to know about those who did not die: how many people in this population were vaccinated? And how many were not vaccinated?
Suppose that, of these 60 people, 50 were vaccinated and 10 were not. Now we have all the information we need and can calculate the death rates:
of 10 unvaccinated people, 5 died → the death rate among the unvaccinated is 50%
of 50 vaccinated people, 5 died → the death rate among the vaccinated is 10%
We therefore see that the death rate among the vaccinated is five times lower than among the unvaccinated.
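To make the arithmetic explicit, here is a minimal Python sketch of the same calculation, using the invented numbers from the example above (the variable names and output format are ours, purely for illustration):

```python
# Invented numbers from the example: 60 people, 50 vaccinated, 10 deaths in total.
vaccinated = {"people": 50, "deaths": 5}
unvaccinated = {"people": 10, "deaths": 5}

def death_rate(group):
    """Deaths per person in the group, expressed as a percentage."""
    return 100 * group["deaths"] / group["people"]

share_of_deaths_vaccinated = vaccinated["deaths"] / (vaccinated["deaths"] + unvaccinated["deaths"])
print(f"Share of deaths that were vaccinated: {share_of_deaths_vaccinated:.0%}")  # 50%
print(f"Death rate among the vaccinated:   {death_rate(vaccinated):.0f}%")        # 10%
print(f"Death rate among the unvaccinated: {death_rate(unvaccinated):.0f}%")      # 50%
```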
In the example, we invented numbers to make the death rates simple to calculate. But the same logic applies in the current COVID-19 pandemic. Comparing absolute numbers, as some headlines do, commits a mistake known in statistics as the ‘base rate fallacy’: it ignores the fact that one group is much larger than the other. It is important to avoid this mistake, especially now, as in more and more countries the number of people who are vaccinated against COVID-19 is much larger than the number of people who are unvaccinated (see our vaccination data).
This example illustrates how to think about these statistics in a hypothetical case. Below, you can find the real data for the situation in the COVID-19 pandemic now.
A team of researchers from the University of Alabama, the University of Melbourne and the University of California has found that social scientists are able to change their beliefs regarding the outcome of an experiment when given the chance. In a paper published in the journal Nature Human Behavior, the group describes how they tested the ability of scientists to change their beliefs about a scientific idea when shown evidence of replicability. Michael Gordon and Thomas Pfeifer with Massey University have published a News & Views piece in the same journal issue explaining why scientists must be able to update their beliefs.
The researchers set out to study a conundrum in science. It is generally accepted that scientific progress can only be made if scientists update their beliefs when new ideas come along. The conundrum is that scientists are human beings and human beings are notoriously difficult to sway from their beliefs. To find out if this might be a problem in general science endeavors, the researchers created an environment that allowed for testing the possibility.
The work involved sending questionnaires to 1,100 social scientists asking how they felt about the outcomes of several recent well-known studies. The team then conducted replication efforts on those same studies to determine whether they could reproduce the original findings. Finally, they sent the results of their replication efforts to the social scientists who had been queried earlier and once again asked how they felt about the results of the original teams.
In looking at their data, and factoring out related biases, they found that most of the scientists who participated lost some confidence in the results of studies when the researchers could not replicate them, and gained some confidence when they could. The researchers suggest this indicates that scientists, at least those in social fields, are able to rise above their beliefs when faced with scientific evidence, ensuring that science is indeed allowed to progress, despite it being conducted by fallible human beings.
Suppose you are trying to transmit a message. Convert each character into bits, and each bit into a signal. Then send it, over copper or fiber or air. Try as you might to be as careful as possible, what is received on the other side will not be the same as what you began with. Noise never fails to corrupt.
In the 1940s, computer scientists first confronted the unavoidable problem of noise. Five decades later, they came up with an elegant approach to sidestepping it: What if you could encode a message so that it would be obvious if it had been garbled before your recipient even read it? A book can’t be judged by its cover, but this message could.
They called this property local testability, because such a message can be tested super-fast in just a few spots to ascertain its correctness. Over the next 30 years, researchers made substantial progress toward creating such a test, but their efforts always fell short. Many thought local testability would never be achieved in its ideal form.
Now, in a preprint released on November 8, the computer scientist Irit Dinur of the Weizmann Institute of Science and four mathematicians, Shai Evra, Ron Livne, Alex Lubotzky and Shahar Mozes, all at the Hebrew University of Jerusalem, have found it.
[…]
Their new technique transforms a message into a super-canary, an object that testifies to its health better than any other message yet known. Any corruption of significance that is buried anywhere in its superstructure becomes apparent from simple tests at a few spots.
“This is not something that seems plausible,” said Madhu Sudan of Harvard University. “This result suddenly says you can do it.”
[…]
To work well, a code must have several properties. First, the codewords in it should not be too similar: If a code contained the codewords 0000 and 0001, it would only take one bit-flip’s worth of noise to confuse the two words. Second, codewords should not be too long. Repeating bits may make a message more durable, but they also make it take longer to send.
These two properties are called distance and rate. A good code should have both a large distance (between distinct codewords) and a high rate (of transmitting real information).
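As a concrete illustration of these two numbers, here is a small Python sketch that computes the minimum distance and the rate of a toy four-word code (the code itself is a made-up example for illustration, not one from the research described here):

```python
import math
from itertools import combinations

def hamming_distance(a, b):
    """Number of positions at which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

# A toy code: it encodes 2 information bits into 3 transmitted bits.
code = ["000", "011", "101", "110"]

# Distance: the smallest number of bit flips that could turn one codeword into another.
distance = min(hamming_distance(a, b) for a, b in combinations(code, 2))

# Rate: information bits carried per transmitted bit.
rate = math.log2(len(code)) / len(code[0])

print(f"minimum distance = {distance}")  # 2: any two codewords differ in at least 2 bits
print(f"rate = {rate:.2f}")              # 0.67: 2 information bits per 3 bits sent
```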
[…]
To understand why testability is so hard to obtain, we need to think of a message not just as a string of bits, but as a mathematical graph: a collection of vertices (dots) connected by edges (lines).
[…]
Hamming’s work set the stage for the ubiquitous error-correcting codes of the 1980s. He came up with a rule that each message should be paired with a set of receipts, which keep an account of its bits. More specifically, each receipt is the sum of a carefully chosen subset of bits from the message. When this sum has an even value, the receipt is marked 0, and when it has an odd value, the receipt is marked 1. Each receipt is represented by one single bit, in other words, which researchers call a parity check or parity bit.
Hamming specified a procedure for appending the receipts to a message. A recipient could then detect errors by attempting to reproduce the receipts, calculating the sums for themselves. These Hamming codes work remarkably well, and they are the starting point for seeing codes as graphs and graphs as codes.
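To make the idea of receipts concrete, here is a minimal Python sketch in the spirit of Hamming’s scheme. The particular subsets of message bits used below are the standard choice for the small (7,4) Hamming code, shown purely as an illustration of parity checks, not as the construction from the new work:

```python
message = [1, 0, 1, 1]  # four information bits

# Each receipt (parity bit) is the sum, mod 2, of a chosen subset of message bits.
# These subsets are the standard (7,4) Hamming choice.
receipt_subsets = [(0, 1, 3), (0, 2, 3), (1, 2, 3)]

def receipts(bits):
    """Compute one parity bit per subset of message positions."""
    return [sum(bits[i] for i in subset) % 2 for subset in receipt_subsets]

sent = message + receipts(message)   # append the receipts to the message

received = sent.copy()
received[2] ^= 1                     # noise flips one message bit in transit

# The recipient recomputes the receipts and compares them with the ones received.
recomputed = receipts(received[:len(message)])
print("error detected:", recomputed != received[len(message):])  # True
```

In a full Hamming code, the pattern of which receipts fail even pinpoints which bit was flipped, so the error can be corrected rather than merely detected.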
[…]
Expander graphs are distinguished by two properties that can seem contradictory. First, they are sparse: Each node is connected to relatively few other nodes. Second, they have a property called expansion — the reason for their name — which means that no set of nodes can be a bottleneck that few edges pass through. Each node is well connected to other nodes, in other words — despite the scarcity of the connections it has.
[…]
However, choosing codewords completely at random would make for an unpredictable dictionary that was excessively hard to sort through. In other words, Shannon showed that good codes exist, but his method for making them didn’t work well.
[…]
However, local testability was not possible. Suppose that you had a valid codeword from an expander code, and you removed one receipt, or parity bit, from one single node. That would constitute a new code, which would have many more valid codewords than the first code, since there would be one less receipt they needed to satisfy. For someone working off the original code, those new codewords would satisfy the receipts at most nodes — all of them, except the one where the receipt was erased. And yet, because both codes have a large distance, the new codeword that seems correct would be extremely far from the original set of codewords. Local testability was simply incompatible with expander codes.
[…]
Local testability was achieved by 2007, but only at the cost of other parameters, like rate and distance. In particular, these parameters would degrade as a codeword became large. In a world constantly seeking to send and store larger messages, these diminishing returns were a major flaw.
[…]
But in 2017, a new source of ideas emerged. Dinur and Lubotzky began working together while attending a yearlong research program at the Israel Institute for Advanced Studies. They came to believe that a 1973 result by the mathematician Howard Garland might hold just what computer scientists sought. Whereas ordinary expander graphs are essentially one-dimensional structures, with each edge extending in only one direction, Garland had created a mathematical object that could be interpreted as an expander graph that spanned higher dimensions, with, for example, the graph’s edges redefined as squares or cubes.
Garland’s high-dimensional expander graphs had properties that seemed ideal for local testability. They must be deliberately constructed from scratch, making them a natural antithesis of randomness. And their nodes are so interconnected that their local characteristics become virtually indistinguishable from how they look globally.
[…]
In their new work, the authors figured out how to assemble expander graphs to create a new graph that leads to the optimal form of locally testable code. They call their graph a left-right Cayley complex.
As in Garland’s work, the building blocks of their graph are no longer one-dimensional edges, but two-dimensional squares. Each information bit from a codeword is assigned to a square, and parity bits (or receipts) are assigned to edges and corners (which are nodes). Each node therefore defines the values of bits (or squares) that can be connected to it.
To get a sense of what their graph looks like, imagine observing it from the inside, standing on a single edge. They construct their graph such that every edge has a fixed number of squares attached. Therefore, from your vantage point you’d feel as if you were looking out from the spine of a booklet. However, from the other three sides of the booklet’s pages, you’d see the spines of new booklets branching from them as well. Booklets would keep branching out from each edge ad infinitum.
“It’s impossible to visualize. That’s the whole point,” said Lubotzky. “That’s why it is so sophisticated.”
Crucially, the complicated graph also shares the properties of an expander graph, like sparseness and connectedness, but with a much richer local structure. For example, an observer sitting at one vertex of a high-dimensional expander could use this structure to straightforwardly infer that the entire graph is strongly connected.
“What’s the opposite of randomness? It’s structure,” said Evra. “The key to local testability is structure.”
To see how this graph leads to a locally testable code, consider that in an expander code, if a bit (which is an edge) is in error, that error can only be detected by checking the receipts at its immediately neighboring nodes. But in a left-right Cayley complex, if a bit (a square) is in error, that error is visible from multiple different nodes, including some that are not even connected to each other by an edge.
In this way, a test at one node can reveal information about errors from far away nodes. By making use of higher dimensions, the graph is ultimately connected in ways that go beyond what we typically even think of as connections.
In addition to testability, the new code maintains rate, distance and other desired properties, even as codewords scale, proving the c3 conjecture true. It establishes a new state of the art for error-correcting codes, and it also marks the first substantial payoff from bringing the mathematics of high-dimensional expanders to bear on codes.
To grow and spread, cancer cells must evade the immune system. Investigators from Brigham and Women’s Hospital and MIT used the power of nanotechnology to discover a new way that cancer can disarm its would-be cellular attackers by extending out nanoscale tentacles that can reach into an immune cell and pull out its powerpack. Slurping out the immune cell’s mitochondria powers up the cancer cell and depletes the immune cell. The new findings, published in Nature Nanotechnology, could lead to new targets for developing the next generation of immunotherapy against cancer.
“Cancer kills when the immune system is suppressed and cancer cells are able to metastasize, and it appears that nanotubes can help them do both,” said corresponding author Shiladitya Sengupta, PhD, co-director of the Brigham’s Center for Engineered Therapeutics. “This is a completely new mechanism by which cancer cells evade the immune system and it gives us a new target to go after.”
To investigate how cancer cells and immune cells interact at the nanoscale level, Sengupta and colleagues set up experiments in which they co-cultured breast cancer cells and immune cells, such as T cells. Using field-emission scanning electron microscopy, they caught a glimpse of something unusual: Cancer cells and immune cells appeared to be physically connected by tiny tendrils, with widths mostly in the 100-1000 nanometer range. (For comparison, a human hair is approximately 80,000 to 100,000 nanometers wide.) In some cases, the nanotubes came together to form thicker tubes. The team then stained mitochondria — which provide energy for cells — from the T cells with a fluorescent dye and watched as bright green mitochondria were pulled out of the immune cells, through the nanotubes, and into the cancer cells.
A pair of theoretical physicists, from the University of Exeter (United Kingdom) and the University of Zaragoza (Spain), have developed a quantum theory explaining how to engineer non-reciprocal flows of quantum light and matter. The research may be important for the creation of quantum technologies which require the directional transfer of energy and information at small scales.
Reciprocity, going the same way backward as forward, is a ubiquitous concept in physics. A famous example may be found in Newton’s Law: for every action there is an equal and opposite reaction. The breakdown of such a powerful notion as reciprocity in any area of physics, from mechanics to optics to electromagnetism, is typically associated with surprises which can be exploited for technological application. For example, a nonreciprocal electric diode allows current to pass in forwards but not backwards and forms a building block of the microelectronics industry.
In their latest research, Downing and Zueco provide a quantum theory of non-reciprocal transport around a triangular cluster of strongly interacting quantum objects. Inspired by the physics of quantum rings, they show that by engineering an artificial magnetic field one may tune the direction of the energy flow around the cluster. The theory accounts for strong particle interactions, such that directionality appears at a swathe of energies, and considers the pernicious effect of dissipation for the formation of non-reciprocal quantum currents.
The research may be useful in the development of quantum devices requiring efficient, directional transportation, as well as for further studies of strongly interacting quantum phases, synthetic magnetic fields and quantum simulators.
Charles Downing from the University of Exeter explains: “Our calculations provide insight into how one may instigate directional transport in closed nanoscopic lattices of atoms and photons with strong interactions, which may lead to the development of novel devices of a highly directional character”.
“Non-reciprocal population dynamics in a quantum trimer” is published in Proceedings of the Royal Society A, a historic journal which has been publishing scientific research since 1905.
Yekaterina “Kate” Shulgina was a first-year student in the Graduate School of Arts and Sciences, looking for a short computational biology project to check off a requirement for her program in systems biology. She wondered how the genetic code, once thought to be universal, could evolve and change.
That was 2016 and today Shulgina has come out the other end of that short-term project with a way to decipher this genetic mystery. She describes it in a new paper in the journal eLife with Harvard biologist Sean Eddy.
The report details a new computer program that can read the genome sequence of any organism and then determine its genetic code. The program, called Codetta, has the potential to help scientists expand their understanding of how the genetic code evolves and correctly interpret the genetic code of newly sequenced organisms.
“This in and of itself is a very fundamental biology question,” said Shulgina, who does her graduate research in Eddy’s lab.
The genetic code is the set of rules that tells cells how to translate the three-letter combinations of nucleotides into proteins, often referred to as the building blocks of life. Almost every organism, from E. coli to humans, uses the same genetic code. It’s why the code was once thought to be set in stone. But scientists have discovered that a handful of outliers exist: organisms that use alternative genetic codes, in which the set of instructions is different.
This is where Codetta can shine. The program can help to identify more organisms that use these alternative genetic codes, helping shed new light on how genetic codes can even change in the first place.
“Understanding how this happened would help us reconcile why we originally thought this was impossible… and how these really fundamental processes actually work,” Shulgina said.
Already, Codetta has analyzed the genome sequences of over 250,000 bacteria and other single-celled organisms called archaea for alternative genetic codes, and has identified five that have never been seen before. In all five cases, the code for the amino acid arginine was reassigned to a different amino acid. It is believed to mark the first time scientists have seen this swap in bacteria, and it could hint at the evolutionary forces that go into altering the genetic code.
Scientists have discovered a new phase of water — adding to liquid, solid and gas — known as “superionic ice.” The “strange black” ice, as scientists called it, is normally created at the core of planets like Neptune and Uranus.
In a study published in Nature Physics, a team of scientists co-led by Vitali Prakapenka, a University of Chicago research professor, detailed the extreme conditions necessary to produce this kind of ice. It had only been glimpsed once before, when scientists sent a massive shockwave through a droplet of water, creating superionic ice that only existed for an instant.
In this experiment, the research team took a different approach. They pressed water between two diamonds, the hardest material on Earth, to reproduce the intense pressure that exists at the core of planets. Then they shot a laser through the diamonds to heat the water and probed it with the high-brightness X-ray beams of the Advanced Photon Source, according to the study.
“Imagine a cube, a lattice with oxygen atoms at the corners connected by hydrogen. When it transforms into this new superionic phase, the lattice expands, allowing the hydrogen atoms to migrate around while the oxygen atoms remain steady in their positions,” Prakapenka said in a press release. “It’s kind of like a solid oxygen lattice sitting in an ocean of floating hydrogen atoms.”
Using an X-ray to look at the results, the team found the ice became less dense and was described as black in color because it interacted differently with light.
“It’s a new state of matter, so it basically acts as a new material, and it may be different from what we thought,” Prakapenka said.
What surprised the scientists the most was that superionic ice was created under a much lighter pressure than they’d originally speculated. They had thought that it would not be created until the water was compressed to over 50 gigapascals of pressure — the same amount of pressure inside rocket fuel as it combusts for lift-off — but it only took 20 gigapascals of pressure.
[…]
Superionic ice doesn’t exist only inside far-away planets — it’s also inside Earth, and it plays a role in maintaining our planet’s magnetic fields. Earth’s intense magnetism protects the planet’s surface from dangerous radiation and cosmic rays that come from outer space.
No one knows why some people age worse than others and develop diseases associated with this aging process, such as Alzheimer’s, fibrosis, type 2 diabetes or some types of cancer. One explanation could be the degree of efficiency of each organism’s response to the damage sustained by its cells during its life, which eventually causes them to age. In relation to this, researchers at the Universitat Oberta de Catalunya (UOC) and the University of Leicester (United Kingdom) have developed a new method to remove old cells from tissues, thus slowing down the aging process.
Specifically, they have designed an antibody that acts as a smart bomb able to recognize specific proteins on the surface of these aged or senescent cells. It then attaches itself to them and releases a drug that removes them without affecting the rest, thus minimizing any potential side effects.
[…]
“We now have, for the first time, an antibody-based drug that can be used to help slow down cellular senescence in humans,” noted Salvador Macip, the leader of this research and a doctor and researcher at the UOC and the University of Leicester.
“We based this work on existing cancer therapies that target specific proteins present on the surface of cancer cells, and then applied them to senescent cells,” explained the expert.
All living organisms have a mechanism known as “cellular senescence” that halts the division of damaged cells and removes them to stop them from reproducing. This mechanism helps slow down the progress of cancer, for example, as well as helping model tissue at the embryo development stage.
However, in spite of being a very beneficial biological mechanism, it contributes to the development of diseases when the organism reaches old age. This seems to be because the immune system is no longer able to efficiently remove these senescent cells, which gradually accumulate in tissues and detrimentally affect their functioning.
[…]
The drug designed by Macip and his team is a second-generation senolytic with high specificity and remote-controlled delivery. They started from the results of a previous study that looked at the “surfaceome,” the proteins on the cell’s surface, to identify those proteins that are only present in senescent cells. “They’re not universal: some are more present than others on each type of aged cell,” said Macip.
In this new work, the researchers used a monoclonal antibody trained to recognize senescent cells and attach to them. “Just like our antibodies recognize germs and protect us from them, we’ve designed these antibodies to recognize old cells. In addition, we’ve given them a toxic load to destroy them, as if they were a remote-controlled missile,” said the researcher, who is the head of the University of Leicester’s Mechanisms of Cancer and Aging Lab.
Treatment could begin as soon as the first symptoms of diseases such as Alzheimer’s, type 2 diabetes, Parkinson’s, arthritis, cataracts or some tumors appear. In the long term, the researchers believe that it could even be used to achieve healthier aging in some circumstances.
Daily exposure to phthalates, a group of chemicals used in everything from plastic containers to makeup, may lead to approximately 100,000 deaths in older Americans annually, a study from New York University warned Tuesday.
The chemicals, which can be found in hundreds of products such as toys, clothing and shampoo, have been known for decades to be “hormone disruptors,” affecting a person’s endocrine system.
The toxins can enter the body through such items and are linked to obesity, diabetes and heart disease, said the study published in the journal Environmental Pollution.
The research, which was carried out by New York University’s Grossman School of Medicine and included some 5,000 adults aged 55 to 64, shows that those with higher concentrations of phthalates in their urine were more likely to die of heart disease.
The decades-long fight against malaria, one of the world’s deadliest diseases, is likely to get much easier now that the World Health Organization has endorsed the wide use of a malaria vaccine developed by GlaxoSmithKline, the first ever to win such approval. The vaccine will be recommended for children in sub-Saharan Africa and other high-risk areas as a four-dose schedule starting at age 5 months.
[…]
“This is a historic moment. The long-awaited malaria vaccine for children is a breakthrough for science, child health and malaria control,” said WHO Director-General Tedros Adhanom Ghebreyesus in a statement announcing their endorsement of the vaccine. “Using this vaccine on top of existing tools to prevent malaria could save tens of thousands of young lives each year.”
Despite the good news, GlaxoSmithKline’s vaccine, which is currently code-named RTS,S/AS01 but will be branded as Mosquirix, is only modestly effective. In the clinical trials evaluated for WHO approval, it was found to prevent around half of severe cases caused by P. falciparum malaria, compared to the control group. But this level of efficacy was only seen in the first year of vaccination, and by the fourth year, protection had waned to very low levels. At roughly 55% efficacy, the vaccine meets the bare minimum for WHO endorsement.
A major study this year did find that a combination of the vaccine and anti-malarial drugs can further reduce the risk of severe disease and death by 70%, a much more appealing target for public health programs. But even as is, one study has projected that the vaccine would prevent millions of cases and over 20,000 deaths annually in sub-Saharan Africa if deployed widely.
Like other vaccines before it, Mosquirix may also represent the first step toward more effective vaccines in the future. There are several other candidates in development already, including one from Moderna that’s relying on the same mRNA platform as the company’s successful covid-19 vaccine.
Carlene Knight’s vision was so bad that she couldn’t even maneuver around the call center where she works using her cane. But that’s changed as a result of volunteering for a landmark medical experiment. Her vision has improved enough for her to make out doorways, navigate hallways, spot objects and even see colors. Knight is one of seven patients with a rare eye disease who volunteered to let doctors modify their DNA by injecting the revolutionary gene-editing tool CRISPR directly into cells that are still in their bodies. Knight and [another volunteer in the experiment, Michael Kalberer] gave NPR exclusive interviews about their experience. This is the first time researchers worked with CRISPR this way. Earlier experiments had removed cells from patients’ bodies, edited them in the lab and then infused the modified cells back into the patients. […]
CRISPR is already showing promise for treating devastating blood disorders such as sickle cell disease and beta thalassemia. And doctors are trying to use it to treat cancer. But those experiments involve taking cells out of the body, editing them in the lab, and then infusing them back into patients. That’s impossible for diseases like [Leber congenital amaurosis, or LCA], because cells from the retina can’t be removed and then put back into the eye. So doctors genetically modified a harmless virus to ferry the CRISPR gene editor and infused billions of the modified viruses into the retinas of Knight’s left eye and Kalberer’s right eye, as well as one eye of five other patients. The procedure was done on only one eye just in case something went wrong. The doctors hope to treat the patients’ other eye after the research is complete. Once the CRISPR was inside the cells of the retinas, the hope was that it would cut out the genetic mutation causing the disease, restoring vision by reactivating the dormant cells.
The procedure didn’t work for all of the patients, who have been followed for between three and nine months. It may have failed in some cases because the dose was too low or because the patients’ vision was already too damaged. But Kalberer, who got the lowest dose, and one volunteer who got a higher dose began reporting improvement about four to six weeks after the procedure. Knight and one other patient who received a higher dose improved enough to register gains on a battery of tests that included navigating a maze. For two others, it’s too soon to tell. None of the patients have regained normal vision; far from it. But the improvements are already making a difference, the researchers say, and no significant side effects have occurred. Many more patients will have to be treated and followed for much longer to confirm that the treatment is safe and to learn just how much it helps.
A relatively new type of computing that mimics the way the human brain works has already been transforming how scientists tackle some of the most difficult information-processing problems.
Now, researchers have found a way to make what is called reservoir computing work between 33 and a million times faster, with significantly fewer computing resources and less data input needed.
In fact, in one test of this next-generation reservoir computing, researchers solved a complex computing problem in less than a second on a desktop computer.
With the current state-of-the-art technology, the same problem requires a supercomputer and still takes much longer to solve, said Daniel Gauthier, lead author of the study and professor of physics at The Ohio State University.
[…]
Reservoir computing is a machine learning algorithm developed in the early 2000s and used to solve the “hardest of the hard” computing problems, such as forecasting the evolution of dynamical systems that change over time, Gauthier said.
Dynamical systems, like the weather, are difficult to predict because just one small change in one condition can have massive effects down the line, he said.
One famous example is the “butterfly effect,” in which—in one metaphorical illustration—changes created by a butterfly flapping its wings can eventually influence the weather weeks later.
Previous research has shown that reservoir computing is well-suited for learning dynamical systems and can provide accurate forecasts about how they will behave in the future, Gauthier said.
It does that through the use of an artificial neural network, somewhat like a human brain. Scientists feed data on a dynamical system into a “reservoir” of randomly connected artificial neurons. The network produces useful output that the scientists can interpret and feed back into the network, building a more and more accurate forecast of how the system will evolve in the future.
The larger and more complex the system, and the more accurate the scientists want the forecast to be, the bigger the network of artificial neurons has to be and the more computing resources and time needed to complete the task.
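To make the idea concrete, here is a minimal sketch of a “classic” reservoir computer (an echo state network) in Python. It is only an illustration of the approach described above, not the code used in the study; the network size, the scaling of the random connections, the run_reservoir helper and the toy prediction task are all invented for this demonstration.

```python
import numpy as np

# Toy echo state network: a fixed random "reservoir" plus a trained linear readout.
rng = np.random.default_rng(0)

n_inputs, n_reservoir = 1, 300                     # sizes chosen only for illustration
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.normal(0.0, 1.0, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # scale the spectral radius below 1

def run_reservoir(inputs):
    """Drive the fixed random network with an input series and record its states."""
    states = np.zeros((len(inputs), n_reservoir))
    x = np.zeros(n_reservoir)
    for t, u in enumerate(inputs):
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states[t] = x
    return states

# Example task: learn to predict the next value of a noisy sine wave.
u = np.sin(0.2 * np.arange(2000)) + 0.01 * rng.normal(size=2000)
states = run_reservoir(u[:-1])

warmup = 100                                       # discard early states ("warmup" data)
X, y = states[warmup:], u[1 + warmup:]

# Ridge regression: only the readout weights are trained; the reservoir stays fixed.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)

print("one-step prediction error:", np.mean((X @ W_out - y) ** 2))
```

Only the final readout weights are trained; the random reservoir itself is never adjusted, which is what keeps training relatively cheap, and the discarded early states correspond to the “warmup” data discussed below.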
One issue has been that the reservoir of artificial neurons is a “black box,” Gauthier said, and scientists have not known exactly what goes on inside of it—they only know it works.
The artificial neural networks at the heart of reservoir computing are built on mathematics, Gauthier explained.
“We had mathematicians look at these networks and ask, ‘To what extent are all these pieces in the machinery really needed?'” he said.
In this study, Gauthier and his colleagues investigated that question and found that the whole reservoir computing system could be greatly simplified, dramatically reducing the need for computing resources and saving significant time.
They tested their concept on a forecasting task involving a simplified weather model developed by Edward Lorenz, whose work led to our understanding of the butterfly effect.
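The Lorenz model referenced here is a set of three coupled differential equations whose trajectories are chaotic. The short sketch below integrates it numerically to generate the kind of data such a forecasting task uses; the step size and the standard parameter values (sigma = 10, rho = 28, beta = 8/3) are the conventional textbook choices, not necessarily those used in the study, and lorenz_step is a name invented for this example.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system one step with a 4th-order Runge-Kutta update."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Generate a chaotic trajectory to use as forecasting data.
state = np.array([1.0, 1.0, 1.0])
trajectory = []
for _ in range(5000):
    state = lorenz_step(state)
    trajectory.append(state)
trajectory = np.array(trajectory)   # shape (5000, 3): the x, y, z coordinates over time
```

Because nearby trajectories of this system diverge quickly, small forecasting errors compound rapidly, which is what makes it a demanding benchmark.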
Their next-generation reservoir computing was a clear winner over today’s state-of-the-art on this Lorenz forecasting task. In one relatively simple simulation done on a desktop computer, the new system was 33 to 163 times faster than the current model.
But when the aim was for great accuracy in the forecast, the next-generation reservoir computing was about 1 million times faster. And the new-generation computing achieved the same accuracy with the equivalent of just 28 neurons, compared to the 4,000 needed by the current-generation model, Gauthier said.
An important reason for the speed-up is that the “brain” behind this next generation of reservoir computing needs a lot less warmup and training compared to the current generation to produce the same results.
Warmup is training data that needs to be added as input into the reservoir computer to prepare it for its actual task.
[…]
“Currently, scientists have to put in 1,000 or 10,000 data points or more to warm it up. And that’s all data that is lost, that is not needed for the actual work. We only have to put in one or two or three data points,” he said.
[…]
In their test of the Lorenz forecasting task, the researchers could get the same results using 400 data points as the current generation produced using 5,000 data points or more, depending on the accuracy desired.
When two substances are brought together, they will eventually settle into a steady state called thermodynamic equilibrium; examples include oil floating on top of water and milk mixing uniformly into coffee. Researchers at Aalto University in Finland wanted to disrupt this sort of state to see what happens—and whether they can control the outcome.
[…]
In their work, the team used combinations of oils with different dielectric constants and conductivities. They then subjected the liquids to an electric field.
“When we turn on an electric field over the mixture, electrical charge accumulates at the interface between the oils. This charge density shears the interface out of thermodynamic equilibrium and into interesting formations,” explains Dr. Nikos Kyriakopoulos, one of the authors of the paper. As well as being disrupted by the electric field, the liquids were confined into a thin, nearly two-dimensional sheet. This combination led to the oils reshaping into various completely unexpected droplets and patterns.
The droplets in the experiment could be made into squares and hexagons with straight sides, which is almost impossible in nature, where small bubbles and droplets tend to form spheres. The two liquids could also be made to form interconnected lattices: grid patterns that occur regularly in solid materials but are unheard of in liquid mixtures. The liquids could even be coaxed into forming a torus, a donut shape, which was stable and held its shape while the field was applied; this does not happen in nature, where liquids have a strong tendency to collapse inward and fill the hole at the center. The liquids could also form filaments that roll and rotate around an axis.
[…]
The research was carried out at the Department of Applied Physics in the Active Matter research group, led by Professor Timonen. The paper “Diversity of non-equilibrium patterns and emergence of activity in confined electrohydrodynamically driven liquids” is published open-access in Science Advances.
More information: Diversity of non-equilibrium patterns and emergence of activity in confined electrohydrodynamically driven liquids, Science Advances (2021). DOI: 10.1126/sciadv.abh1642
An enormous randomized trial of communities in Bangladesh seems to provide the clearest evidence yet that regular mask-wearing can impede the spread of covid-19. The study found that villages where masks were heavily promoted and became more popular experienced noticeably lower rates of covid-like symptoms and confirmed past infections than villages where mask-wearing remained low. These improvements were even more pronounced in villages given free surgical masks rather than cloth masks.
Plenty of data has emerged over the last year and a half to support the use of masks during the covid-19 pandemic, both in the real world and in the lab. But it’s less clear exactly how much of a benefit these masks can provide wearers (and their communities), and there are at least some studies that have been inconclusive in showing a noticeable benefit.
[…]
Late last year, however, dozens of scientists teamed up with public health advocacy organizations and the Bangladesh government to conduct a massive randomized trial of masks, a design often seen as the gold standard of evidence. And on Wednesday, they released the results of their research in a working paper through the research nonprofit Innovations for Poverty Action.
The study involved 600 villages in a single region of the country with over 350,000 adult residents combined. Villages were matched into similar pairs and randomly assigned to one of two conditions (within a pair of villages with similar population density, for instance, one village would go to each condition). In one condition, the researchers and their partners promoted the use of masks through various incentives between November 2020 and January 2021. These incentives included free masks, endorsements by local leaders, and sometimes financial prizes for villages that achieved widespread mask usage. In two-thirds of the intervention villages, the free masks given were surgical, while the other third were given free cloth masks. In the second condition, the researchers simply observed the villages and did nothing to encourage masks during that time.
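As a purely hypothetical illustration of this kind of pair-matched assignment, the short Python sketch below matches made-up villages on a single characteristic (population density) and then flips a coin within each pair; the real trial’s matching and assignment were more involved and done at a far larger scale.

```python
import random

# Hypothetical pair-matched cluster randomization: villages are paired on a
# similarity measure (here, population density), then one village in each pair
# is randomly assigned to the mask-promotion arm and the other to control.
# The village names and densities below are made up for the example.
villages = [
    ("Village A", 410), ("Village B", 395),
    ("Village C", 820), ("Village D", 860),
]

random.seed(1)
villages.sort(key=lambda v: v[1])        # order by density so neighbors are similar
assignments = {}
for i in range(0, len(villages), 2):
    pair = [villages[i][0], villages[i + 1][0]]
    random.shuffle(pair)                 # flip a coin within each matched pair
    assignments[pair[0]] = "mask promotion"
    assignments[pair[1]] = "control"

print(assignments)
```

Matching before randomizing helps ensure the intervention and control groups look similar on the matched characteristics, so differences in outcomes are easier to attribute to the mask promotion itself.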
Residents in the villages where masks were encouraged did start wearing them more, though no individual nudge or incentive seemed to do better than the others. By the end, about 42% of residents in these villages wore masks regularly, compared to 13% of those in the control group. And in these communities, the odds of people reporting symptoms that may have been covid or testing positive for antibodies to the virus declined.
Overall, the average proportion of people who reported symptoms in the weeks following the mask promotions went down by 11% in these villages compared to the control group, and the average number of people with antibodies to the virus went down by over 9%. These differences were larger in surgical-mask villages (12% vs 5% for reducing symptoms) and for residents over 60 (a 35% reduction in infections for older residents in surgical-mask villages).
Some of this effect might not have come directly from the ability of masks to block transmission of the virus. Those who used masks, the study found, were also more likely to practice social distancing. That’s a relevant finding, the authors note, since some people who have argued against mask mandates do so by claiming that masks will only make people act more carelessly. This study suggests that the opposite is true—that masks make us more, not less, conscientious of others.
Reddit has finally cracked down on COVID-19 misinformation following growing calls to act, although it probably won’t satisfy many of its critics. The social site has banned r/NoNewNormal and quarantined 54 other COVID-19 denial subreddits, but not over the false claims themselves. Instead, it’s for abuse — NoNewNormal was caught brigading en masse (that is, flooding other subreddits) despite warnings, while the other communities violated a rule forbidding harassment and bullying.
The company didn’t, however, relent on its approach to tackling the misinformation itself. Reddit said it clamps down on posts that encourage a “significant risk of physical harm” or are manipulations intended to mislead others, but made no mention of purging posts or subreddits merely for making demonstrably false claims about COVID-19 or vaccines.
Reddit previously defended its position by arguing its platform was meant to foster “open and authentic” conversations, even if they disagree with a widely established consensus. However, that stance hasn’t satisfied many of Reddit’s users. Business Insider noted 135 subreddits went “dark” (that is, went private) in protest over Reddit’s seeming tolerance of COVID-19 misinformation, including major communities like r/TIFU.
Critics among those groups contended that Reddit let these groups blossom through “inaction and malice,” and that Reddit wasn’t consistent in enforcing its own policies on misinformation and abuse. As one redditor pointed out, Reddit’s claims about allowing dissenting ideas don’t carry much weight — the COVID-19 denial groups are presenting false statements, not just contrary opinions.