[…] The invention, by a University of Bristol physicist who gave it the name “counterportation,” provides the first-ever practical blueprint for creating in the lab a wormhole that verifiably bridges space, as a probe into the inner workings of the universe.
The scheme, a novel way of computing revealed in the journal Quantum Science and Technology, harnesses the basic laws of physics to reconstitute a small object across space without any particles crossing. Among other things, it provides a “smoking gun” for the existence of a physical reality underpinning our most accurate description of the world.
[…]
Hatim said, “Here’s the sharp distinction. While counterportation achieves the end goal of teleportation, namely disembodied transport, it remarkably does so without any detectable information carriers traveling across.”
[…]
“If counterportation is to be realized, an entirely new type of quantum computer has to be built: an exchange-free one, where communicating parties exchange no particles,” Hatim said.
[…]
“The goal in the near future is to physically build such a wormhole in the lab, which can then be used as a testbed for rival physical theories, even ones of quantum gravity,” Hatim added.
[…]
John Rarity, professor of optical communication systems at the University of Bristol, said, “We experience a classical world which is actually built from quantum objects. The proposed experiment can reveal this underlying quantum nature showing that entirely separate quantum particles can be correlated without ever interacting. This correlation at a distance can then be used to transport quantum information (qubits) from one location to another without a particle having to traverse the space, creating what could be called a traversable wormhole.”
More information: Hatim Salih, From counterportation to local wormholes, Quantum Science and Technology (2022). DOI: 10.1088/2058-9565/ac8ecd
[…] “The image was captured on a summer evening in São José dos Campos [in São Paulo state] while a negatively charged lightning bolt was nearing the ground at 370 km per second. When it was a few dozen meters from ground level, lightning rods and tall objects on the tops of nearby buildings produced positive upward discharges, competing to connect to the downward strike. The final image prior to the connection was obtained 25 thousandths of a second before the lightning hit one of the buildings,” Saba said.
He used a camera that takes 40,000 frames per second. When the video is played back in slow motion, it shows how lightning discharges behave and also how dangerous they can be if the protection system is not properly installed: Although there are more than 30 lightning rods in the vicinity, the strike connected not to them but to a smokestack on top of one of the buildings. “A flaw in the installation left the area unprotected. The impact of a 30,000-amp discharge did enormous damage,” he said.
[…]
Lightning strikes branch out as the electrical charges seek the path of least resistance, rather than the shortest path, which would be a straight line. The path of least resistance, usually a zigzag, is determined by different electrical characteristics of the atmosphere, which is not homogeneous. “A lightning strike made up of several discharges can last up to 2 seconds. However, each discharge lasts only fractions of milliseconds,” Saba said.
Lightning rods neither attract nor repel strikes, he added. Nor do they “discharge” clouds, as used to be believed. They simply offer lightning an easy and safe route to the ground.
Because it is not always possible to rely on the protection of a lightning rod, and most atmospheric discharges occur in summer in the tropics, it is worth considering Saba’s advice. “Storms are more frequent in the afternoon than in the morning, so be careful about outdoor activities on summer afternoons. Find shelter if you hear thunder, but never under a tree or pole, and never under a rickety roof,” he said.
“If you can’t find a safe place to shelter, stay in the car and wait for the storm to blow over. If no car or other shelter is available, squat down with your feet together. Don’t stand upright or lie flat. Indoors, avoid contact with appliances and fixed-line telephones.”
It is possible to survive being struck by lightning, and there are many examples. The odds increase if the person receives care quickly. “Cardiac arrest is the only cause of death. In this case, cardiopulmonary resuscitation is the recommended treatment,” Saba said.
Saba began systematically studying lightning with high-speed cameras in 2003 and has since built what is now the world’s largest collection of lightning videos filmed at high speed.
More information: Marcelo M. F. Saba et al, Close View of the Lightning Attachment Process Unveils the Streamer Zone Fine Structure, Geophysical Research Letters (2022). DOI: 10.1029/2022GL101482
[…] An interdisciplinary team of scientists has released a complete reconstruction and analysis of a larval fruit fly’s brain, published Thursday in the journal Science. The resulting map, or connectome, as it’s called in neuroscience, includes each of the 3,016 neurons and the 548,000 synapses running between them that make up the baby fly’s entire central nervous system. The connectome includes both of the larva’s brain lobes, as well as the nerve cord.
[…]
It is the most complete insect brain map ever constructed and the most intricate entire connectome of any animal ever published. In short: it’s a game changer.
For some, it represents a paradigm shift in the field of neuroscience. Fruit flies are model organisms, and many neural structures and pathways are thought to be conserved across evolution. What’s true for maggots might well be true for other insects, mice, or even humans.
[…]
this new connectome is like going from a blurry satellite view to a crisp city street map. On the block-by-block array of an insect’s cortex, “now we know where every 7-11 and every, you know, Target [store] is,” Mosca said.
To complete the connectome, a group of Cambridge University scientists spent 12 years focusing on the brain of a single, 6-hour-old female fruit fly larva. The organ, approximately 170 by 160 by 70 micrometers in size, is truly teeny, within an order of magnitude of things too small to see with the naked eye. Yet the researchers were able to use electron microscopy to image it as thousands of slices, each only nanometers thick. Imaging just one of the neurons took a day, on average. From there, once the physical map of the neurons, or “brain volume,” was complete, the analysis began.
Along with computer scientists at Johns Hopkins University, the Cambridge neuroscientists assessed and categorized the neurons and synapses they’d found. The JHU researchers fine-tuned a computer program for this exact application in order to determine cell and synapse types, patterns within the brain connections, and to chart some function onto the larva connectome—based on previous neuroscience studies of behavior and sensory systems.
They found many surprises. For one, the larval fly connectome showed numerous neural pathways that zigzagged between hemispheres, demonstrating just how integrated both sides of the brain are and how nuanced signal processing can be, said Michael Winding, one of the study’s lead researchers and a Cambridge University neuroscientist, in a video call. “I never thought anything would look like that,” Winding said.
[…]
Fascinatingly, these recurrent structures mapped from an actual brain appear to closely match the architecture of some artificial intelligence models (called residual neural networks), with nested pathways allowing for different levels of complexity, Zlatic noted.
[…]
Not only was the revealed neural structure layered, but the neurons themselves appear to be multi-faceted. Sensory cells connected across modalities—visual, smell, and other inputs crossed and interacted en route to output cells, explained Zlatic. “This brain does a huge amount of multi-sensory integration…which is computationally a very powerful thing,” she added.
Then there were the types and relative quantities of cell-to-cell connections. In neuroscience, the classic “canonical” type of synapse runs from an axon to a dendrite. Yet, within the mapped larval fly brain, that only accounted for about two-thirds of the connections, Winding and Zlatic said. Axons linked to axons, dendrites to dendrites, and dendrites to axons. Scientists have known these sorts of connections exist in animal nervous systems, but the scope went far beyond what they expected. “Given the breadth of these connections, they must be important for brain computation,” Winding noted. We just don’t know exactly how.
Drop a flat piece of paper and it will flutter and tumble through the air as it falls, but a well-fashioned paper airplane will glide smoothly. Although these structures look simple, their aerodynamics are surprisingly complex. Researchers at New York University’s Courant Institute of Mathematical Sciences conducted a series of experiments involving paper airplanes to explore this transition and develop a mathematical model to predict flight stability, according to a March paper published in the Journal of Fluid Mechanics.
“The study started with simple curiosity about what makes a good paper airplane and specifically what is needed for smooth gliding,” said co-author Leif Ristroph. “Answering such basic questions ended up being far from child’s play. We discovered that the aerodynamics of how paper airplanes keep level flight is really very different from the stability of conventional airplanes.”
Nobody knows who invented the first paper airplane. Papermaking itself emerged in China during the Han dynasty, roughly two millennia ago, and paper folding later developed into a popular art form. Paper airplanes have long been studied as a means of learning more about the aerodynamics of flight. For instance, Leonardo da Vinci famously built a model plane out of parchment while dreaming up flying machines and used paper models to test his design for an ornithopter. In the 19th century, British engineer and inventor Sir George Cayley—sometimes called the “father of aviation”—studied the gliding performance of paper airplanes to design a glider capable of carrying a human.
An amusing “scientist playing with paper planes” anecdote comes from physicist Theodore von Kármán. In his 1967 memoir The Wind and Beyond, he recalled a formal 1924 banquet in Delft, The Netherlands, where fellow physicist Ludwig Prandtl constructed a paper airplane out of a menu to demonstrate the mechanics of flight to von Kármán’s sister, who was seated next to him. When he threw the paper plane, “It landed on the shirtfront of the French minister of education, much to the embarrassment of my sister and others at the banquet,” von Kármán wrote.
[Figure: Flight motions of paper airplanes with different center of mass locations. Credit: NYU Applied Mathematics Laboratory]
While scientists have clearly made great strides in aerodynamics—particularly about aircraft—Ristroph et al. noted that there was not a good mathematical model for predicting the simpler, subtler gliding flight of paper airplanes. It was already well-known that displacing the center of mass results in various flight trajectories, some more stable than others. “The key criterion of a successful glider is that the center of mass must be in the ‘just right’ place,” said Ristroph. “Good paper airplanes achieve this with the front edge folded over several times or by an added paper clip, which requires a little trial and error.”
He and his team verified this by test-flying various rectangular sheets of paper, changing the front weight by adding thin metallic tape to one edge. They found that an unweighted sheet tumbled end over end while descending left to right under the force of gravity. Adding a small weight to shift the center of mass slightly forward also produced a tumbling trajectory. Overall, they found that flyers with greater front-loading produced erratic trajectories full of swoops, climbs, flips, and dives.
The next step was to conduct more controlled and systematic experiments. Ristroph et al. decided to work with thin plastic plates “flying” through a large glass tank of water. The plates were laser-cut from an acrylic plastic sheet, along with two smaller “fins” embedded with lead weights to displace the center of mass; the fins also served as aerodynamic stabilizers. There were 17 plastic plates, each with a different center of mass. Each was released into the tank by sliding it down a short ramp, and the team recorded its free-flight motion through the water.
[Figure: Trajectories of plates falling through water, where the different colors represent different degrees of front weighting. Only the “just right” weight distribution leads to the smooth gliding shown in blue. Credit: NYU Applied Mathematics Laboratory]
They found the same dynamics played out. If the weight sat at or near the center of the wing, the plate would flutter and tumble erratically. Displace the center of mass too far toward one edge, and the plate would rapidly nosedive and crash. The proverbial “sweet spot” lay between those extremes. In that case, the aerodynamic force on the plane’s wing pushes the wing back down if it moves upward, and back up if it moves downward. In other words, the center of pressure varies with the angle of flight, thereby ensuring stability.
This differs substantially from conventional aircraft, which rely on airfoils—structures designed to generate lift. “The effect we found in paper airplanes does not happen for the traditional airfoils used as aircraft wings, whose center of pressure stays fixed in place across the angles that occur in flight,” said Ristroph. “The shifting of the center of pressure thus seems to be a unique property of thin, flat wings, and this ends up being the secret to the stable flight of paper airplanes. This is why airplanes need a separate tail wing as a stabilizer while a paper plane can get away with just a main wing that gives both lift and stability.”
The team also developed a mathematical model as a “flight simulator” to reproduce those motions. Ristroph et al. think their findings will prove useful in small-scale flight applications like drones or flying robots, which often require a more minimal design with no need for many extra flight surfaces, sensors, and controllers. The authors also note that the same strategy might be at work in winged plant seeds, some of which also exhibit stable gliding, with the seed serving as the payload to displace the center of mass. In fact, a 1987 study of the flying seeds of the gourd Alsomitra macrocarpa showed a center of mass and glide ratios consistent with the Ristroph group’s optimal gliding requirements.
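Stability of this kind is easy to demonstrate in a toy model. The sketch below is not the authors’ simulator; it is a minimal pitch-dynamics loop that assumes only that the center of pressure shifts with pitch so that lift produces a restoring torque, with all parameter values invented for illustration.

```python
# Toy pitch-stability sketch (illustrative only, not the NYU model).
# Assumption: for a thin flat wing with the right front-weighting, the
# center of pressure sits behind the center of mass when the nose pitches
# up, so lift produces a restoring (nose-down) torque; flipping the sign
# mimics a poorly weighted sheet.

def final_pitch(cop_gain=0.05, damping=0.3, dt=0.01, steps=2000):
    theta, omega = 0.3, 0.0          # initial pitch angle (rad) and rate
    for _ in range(steps):
        torque = -cop_gain * theta - damping * omega
        omega += torque * dt         # Euler integration of the pitch axis
        theta += omega * dt
    return theta

print(f"restoring CoP shift:   {final_pitch(+0.05):+.4f} rad (settles to zero)")
print(f"destabilizing shift:   {final_pitch(-0.05):+.4f} rad (diverges, tumbles)")
```

With the restoring sign, any pitch disturbance decays back toward level flight; with the sign flipped, the same disturbance grows without bound, the toy-model analogue of fluttering and tumbling.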
Geology textbooks almost inevitably include a cutaway diagram of the Earth showing four neatly delineated layers: a thin outer shell of rock that we live on known as the crust; the mantle, where rocks flow like an extremely viscous liquid, driving the movement of continents and the lifting of mountains; a liquid outer core of iron and nickel that generates the planet’s magnetic field; and a solid inner core. Analyzing the crisscrossing of seismic waves from large earthquakes, two Australian scientists say there is a distinctly different layer at the very center of the Earth. “We have now confirmed the existence of the innermost inner core,” said one of the scientists, Hrvoje Tkalcic, a professor of geophysics at the Australian National University in Canberra.
Dr. Tkalcic and Thanh-Son Pham, a postdoctoral researcher, estimate that the innermost inner core is about 800 miles wide; the entire inner core is about 1,500 miles wide. Their findings were published on Tuesday in the journal Nature Communications. While the cutaway diagram appears to depict clear-cut divisions, knowledge about the deep interior of Earth is unavoidably fuzzy. It is nearly 4,000 miles to the center of Earth, and it is impossible to drill more than a few miles into the crust. Most of what is known about what lies beneath comes from seismic waves — the vibrations of earthquakes traveling through and around the planet. Think of them as a giant sonogram of Earth.
Two Harvard seismologists, Miaki Ishii and Adam Dziewonski, first proposed the idea of the innermost inner core in 2002 based on peculiarities in the speed of seismic waves passing through the inner core. Scientists already knew that the speed of seismic waves traveling through this part of the Earth varied depending on the direction. The waves traveled fastest when going from pole to pole along the Earth’s axis and slowest when traveling perpendicular to the axis. The difference in speeds — a few percent faster along polar paths — arises from the alignment of iron crystals in the inner core, geophysicists believe. But in a small region at the center, the slowest waves were those traveling at a 45-degree angle to the axis instead of 90 degrees, the Harvard seismologists said. The data available then were too sparse to convince everyone.
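The geometry can be made concrete with the parametrization commonly used for inner-core anisotropy, in which the fractional wave-speed perturbation depends on the angle between a ray path and the rotation axis. The coefficients below are invented to reproduce the qualitative picture described above, not fitted values from either study.

```python
import numpy as np

# Common inner-core anisotropy form: dv/v = a + b*cos^2(xi) + c*cos^4(xi),
# where xi is the angle between the seismic ray and Earth's spin axis.
# Coefficients here are illustrative only.
def dv_over_v(xi_deg, a, b, c):
    cos2 = np.cos(np.radians(xi_deg)) ** 2
    return a + b * cos2 + c * cos2**2

angles = np.array([0.0, 45.0, 90.0])   # polar, oblique, equatorial paths

# Bulk inner core: fastest along the axis, slowest perpendicular to it.
print(dv_over_v(angles, a=0.00, b=0.03, c=0.00))   # [0.03  0.015 0.   ]

# Innermost inner core: choosing b = -c moves the minimum to 45 degrees,
# the oblique slow direction reported for the center.
print(dv_over_v(angles, a=0.03, b=-0.06, c=0.06))  # [0.03  0.015 0.03 ]
```

The point of the exercise: a single extra term in the angular dependence is enough to shift the slow direction from 90 degrees to 45, which is why travel times along oblique paths can distinguish an innermost core from the rest of the inner core.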
Think of bringing a pot of water to the boil: as the temperature reaches the boiling point, bubbles form in the water, burst and evaporate. This continues until all the water has changed phase from liquid to steam.
This is roughly the idea of what happened in the very early universe, right after the Big Bang, 13.7 billion years ago.
The idea comes from particle physicists Martin S. Sloth from the Center for Cosmology and Particle Physics Phenomenology at the University of Southern Denmark and Florian Niedermann from the Nordic Institute for Theoretical Physics (NORDITA) in Stockholm. Niedermann is a former postdoc in Sloth’s research group. In their new scientific article, they present an even stronger basis for their idea.
Many bubbles crashing into each other
“One must imagine that bubbles arose in various places in the early universe. They got bigger and they started crashing into each other. In the end, there was a complicated state of colliding bubbles, which released energy and eventually evaporated,” said Martin S. Sloth.
The background for their theory of phase changes in a bubbling universe is a highly interesting problem with calculating the so-called Hubble constant, a value for how fast the universe is expanding. Sloth and Niedermann believe that the bubbling universe plays a role here.
The Hubble constant can be calculated very reliably by, for example, analyzing cosmic background radiation or by measuring how fast a galaxy or an exploding star is moving away from us. According to Sloth and Niedermann, both methods are not only reliable, but also scientifically recognized. The problem is that the two methods do not lead to the same Hubble constant. Physicists call this problem “the Hubble tension.”
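To see why this is treated as a crisis rather than a rounding error, here is the arithmetic with representative published values (the widely cited Planck CMB and SH0ES supernova results; the article itself does not quote numbers):

```python
from math import sqrt

# Representative Hubble constant measurements in km/s/Mpc (widely cited
# values, used here only to illustrate the size of the tension).
h0_cmb, err_cmb = 67.4, 0.5     # from the cosmic microwave background
h0_sne, err_sne = 73.0, 1.0     # from supernova distance ladders

gap = h0_sne - h0_cmb
sigma = gap / sqrt(err_cmb**2 + err_sne**2)
print(f"gap: {gap:.1f} km/s/Mpc, about {sigma:.1f} standard deviations")
```

A disagreement of roughly five standard deviations is far too large to dismiss as measurement noise, which is exactly why Sloth and Niedermann look for the fault in the underlying model rather than in the measurements.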
Is there something wrong with our picture of the early universe?
“In science, you have to be able to reach the same result by using different methods, so here we have a problem. Why don’t we get the same result when we are so confident about both methods?” said Florian Niedermann.
Sloth and Niedermann believe they have found a way to get the same Hubble constant, regardless of which method is used. The path starts with a phase transition and a bubbling universe—and thus an early, bubbling universe is connected to “the Hubble tension.” “If we assume that these methods are reliable—and we think they are—then maybe the methods are not the problem. Maybe we need to look at the starting point, the basis, that we apply the methods to. Maybe this basis is wrong.”
AI-generated illustration of colliding bubbles in the universe. Credit: Birgitte Svennevig, University of Southern Denmark
An unknown dark energy
The basis for the methods is the so-called Standard Model, which assumes that there was a lot of radiation and matter, both normal and dark, in the early universe, and that these were the dominant forms of energy. The radiation and the normal matter were compressed in a dark, hot and dense plasma, the state of the universe in the first 380,000 years after the Big Bang.
When you base your calculations on the Standard Model, you arrive at different results for how fast the universe is expanding—and thus different Hubble constants.
But maybe a new form of dark energy was at play in the early universe? Sloth and Niedermann think so.
If you introduce the idea that a new form of dark energy in the early universe suddenly began to bubble and undergo a phase transition, the calculations agree. In their model, Sloth and Niedermann arrive at the same Hubble constant when using both measurement methods. They call this idea New Early Dark Energy—NEDE.
Change from one phase to another—like water to steam
Sloth and Niedermann believe that this new, dark energy underwent a phase transition when the universe expanded, shortly before it changed from the dense and hot plasma state to the universe we know today.
“This means that the dark energy in the early universe underwent a phase transition, just as water can change phase between frozen, liquid and steam. In the process, the energy bubbles eventually collided with other bubbles and along the way released energy,” said Niedermann.
“It could have lasted anything from an insanely short time—perhaps just the time it takes two particles to collide—to 300,000 years. We don’t know, but that is something we are working to find out,” added Sloth.
Do we need new physics?
So, the phase transition model is based on the premise that the universe does not behave as the Standard Model tells us. It may sound a little scientifically crazy to suggest that something is wrong with our fundamental understanding of the universe, or that you can just propose the existence of hitherto unknown forces or particles to solve the Hubble tension.
“But if we trust the observations and calculations, we must accept that our current model of the universe cannot explain the data, and then we must improve the model. Not by discarding it and its success so far, but by elaborating on it and making it more detailed so that it can explain the new and better data,” said Martin S. Sloth, adding, “It appears that a phase transition in the dark energy is the missing element in the current Standard Model to explain the differing measurements of the universe’s expansion rate.”
The findings are published in the journal Physics Letters B.
More information: Florian Niedermann et al, Hot new early dark energy: Towards a unified dark sector of neutrinos, dark energy and dark matter, Physics Letters B (2022). DOI: 10.1016/j.physletb.2022.137555
Eight years ago, a patient lost her power of speech because of ALS, or Lou Gehrig’s disease, which causes progressive paralysis. She can still make sounds, but her words have become unintelligible, leaving her reliant on a writing board or iPad to communicate.
Now, after volunteering to receive a brain implant, the woman has been able to rapidly communicate phrases like “I don’t own my home” and “It’s just tough” at a rate approaching normal speech.
That is the claim in a paper published over the weekend on the website bioRxiv by a team at Stanford University. The study has not been formally reviewed by other researchers. The scientists say their volunteer, identified only as “subject T12,” smashed previous records by using the brain-reading implant to communicate at a rate of 62 words a minute, three times the previous best.
[…]
The brain-computer interfaces that Shenoy’s team works with involve a small pad of sharp electrodes embedded in a person’s motor cortex, the brain region most involved in movement. This allows researchers to record activity from a few dozen neurons at once and find patterns that reflect what motions someone is thinking of, even if the person is paralyzed.
In previous work, paralyzed volunteers have been asked to imagine making hand movements. By “decoding” their neural signals in real time, implants have let them steer a cursor around a screen, pick out letters on a virtual keyboard, play video games, or even control a robotic arm.
In the new research, the Stanford team wanted to know if neurons in the motor cortex contained useful information about speech movements, too. That is, could they detect how “subject T12” was trying to move her mouth, tongue, and vocal cords as she attempted to talk?
These are small, subtle movements, and according to Sabes, one big discovery is that just a few neurons contained enough information to let a computer program predict, with good accuracy, what words the patient was trying to say. That information was conveyed by Shenoy’s team to a computer screen, where the patient’s words appeared as they were spoken by the computer.
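In spirit, the decoding step is a pattern classifier over neural firing rates. The sketch below uses entirely synthetic data and an off-the-shelf classifier to show the shape of the problem; the Stanford system used far more sophisticated models trained on real attempted-speech recordings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for motor-cortex recordings: 40 neurons whose firing
# rates shift in a word-specific pattern when a word is attempted.
n_neurons, n_trials, n_words = 40, 300, 5
words = rng.integers(0, n_words, size=n_trials)          # attempted word IDs
tuning = rng.normal(0, 1, size=(n_words, n_neurons))     # per-word signature
rates = tuning[words] + rng.normal(0, 0.8, size=(n_trials, n_neurons))

# Fit a simple decoder on the first 200 trials, test on the remaining 100.
decoder = LogisticRegression(max_iter=1000).fit(rates[:200], words[:200])
print("held-out decoding accuracy:", decoder.score(rates[200:], words[200:]))
```

Even this crude setup decodes well above chance, and adding more simulated neurons drives the error down, a pattern consistent with the team’s finding, described below, that error rates fell as more neurons were read out at once.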
[…]
Shenoy’s group is part of a consortium called BrainGate that has placed electrodes into the brains of more than a dozen volunteers. They use an implant called the Utah Array, a rigid metal square with about 100 needle-like electrodes.
Some companies, including Elon Musk’s brain interface company, Neuralink, and a startup called Paradromics, say they have developed more modern interfaces that can record from thousands—even tens of thousands—of neurons at once.
While some skeptics have asked whether measuring from more neurons at one time will make any difference, the new report suggests it will, especially if the job is to brain-read complex movements such as speech.
The Stanford scientists found that the more neurons they read from at once, the fewer errors they made in understanding what “T12” was trying to say.
“This is a big deal, because it suggests efforts by companies like Neuralink to put 1,000 electrodes into the brain will make a difference, if the task is sufficiently rich,” says Sabes, who previously worked as a senior scientist at Neuralink.
[…]In a study published Monday in the journal Biosensors and Bioelectronics, a group of researchers from Tel Aviv University (via Neuroscience News) said they recently created a robot that can identify a handful of smells with 10,000 times more sensitivity than some specialized electronics. They describe their robot as a bio-hybrid platform (read: cyborg). It features a set of antennae taken from a desert locust, connected to an electronic system that measures the electrical signals the antennae produce when they detect a smell. They paired the robot with an algorithm that learned to characterize the smells by their signal output. In this way, the team created a system that could reliably differentiate between eight “pure” odors, including geranium, lemon and marzipan, and two mixtures of different smells. The scientists say their robot could one day be used to detect drugs and explosives.
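Conceptually, the pipeline is: record the antennae’s electrical response to a puff of odor, reduce it to a few features, and train a classifier on labeled examples. Below is a minimal sketch of that idea on synthetic data; the feature names and numbers are invented, and this is not the team’s actual code.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-in for locust-antenna recordings: assume each odor evokes
# a characteristic response summarized by three invented features
# (peak amplitude, response area, decay time).
odors = ["geranium", "lemon", "marzipan"]
signatures = rng.normal(0, 1, size=(len(odors), 3))   # per-odor feature means
X = np.vstack([s + rng.normal(0, 0.3, size=(60, 3)) for s in signatures])
y = np.repeat(np.arange(len(odors)), 60)

# Train on even-indexed trials, test on odd-indexed ones.
clf = SVC().fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```

The biological component matters because the antenna supplies a far cleaner signal than comparable electronic sensors, which is where the reported 10,000-fold sensitivity advantage comes from.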
A YouTube video from Tel Aviv University claims the robot is a “scientific first,” but last June researchers from Michigan State University published research detailing a system that used surgically-altered locusts to detect cancer cells. Back in 2016, scientists also tried turning locusts into bomb-sniffing cyborgs. What can I say, after millennia of causing crop failures, the pests could finally be useful for something.
The forest carbon offsets approved by the world’s leading provider and used by Disney, Shell, Gucci and other big corporations are largely worthless and could make global heating worse, according to a new investigation.
The research into Verra, the world’s leading carbon standard for the rapidly growing $2bn (£1.6bn) voluntary offsets market, has found that, based on analysis of a significant percentage of the projects, more than 90% of their rainforest offset credits – among the most commonly used by companies – are likely to be “phantom credits” and do not represent genuine carbon reductions.
The analysis raises questions over the credits bought by a number of internationally renowned companies – some of them have labelled their products “carbon neutral”, or have told their consumers they can fly, buy new clothes or eat certain foods without making the climate crisis worse.
But doubts have been raised repeatedly over whether they are really effective.
The nine-month investigation has been undertaken by the Guardian, the German weekly Die Zeit and SourceMaterial, a non-profit investigative journalism organisation. It is based on new analysis of scientific studies of Verra’s rainforest schemes.
[…]
Verra argues that the conclusions reached by the studies are incorrect, and questions their methodology. And they point out that their work since 2009 has allowed billions of dollars to be channelled to the vital work of preserving forests.
The investigation found that:
Only a handful of Verra’s rainforest projects showed evidence of deforestation reductions, according to two studies, with further analysis indicating that 94% of the credits had no benefit to the climate.
The threat to forests had been overstated by about 400% on average for Verra projects, according to analysis of a 2022 University of Cambridge study.
Gucci, Salesforce, BHP, Shell, easyJet, Leon and the band Pearl Jam were among dozens of companies and organisations that have bought rainforest offsets approved by Verra for environmental claims.
Human rights issues are a serious concern in at least one of the offsetting projects. The Guardian visited a flagship project in Peru, and was shown videos that residents said showed their homes being cut down with chainsaws and ropes by park guards and police. They spoke of forced evictions and tensions with park authorities.
[…]
Two different groups of scientists – one internationally based, the other from Cambridge in the UK – looked at a total of about two-thirds of 87 Verra-approved active projects. A number were left out by the researchers when they felt there was not enough information available to fairly assess them.
The two studies from the international group of researchers found that just eight of the 29 Verra-approved projects where further analysis was possible showed evidence of meaningful deforestation reductions.
The journalists were able to do further analysis on those projects, comparing the estimates made by the offsetting projects with the results obtained by the scientists. The analysis indicated about 94% of the credits the projects produced should not have been approved.
Credits from 21 projects had no climate benefit, seven delivered between 52% and 98% less benefit than claimed under Verra’s system, and one had 80% more impact, the investigation found.
Separately, the study by the University of Cambridge team of 40 Verra projects found that while a number had stopped some deforestation, the areas were extremely small. Just four projects were responsible for three-quarters of the total forest that was protected.
The journalists again analysed these results more closely and found that, in 32 projects where it was possible to compare Verra’s claims with the study finding, baseline scenarios of forest loss appeared to be overstated by about 400%. Three projects in Madagascar have achieved excellent results and have a significant impact on the figures. If those projects are not included, the average inflation is about 950%.
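The inflation figure itself is straightforward arithmetic: compare how much deforestation a project claimed would happen without it (its baseline) against the deforestation the independent studies estimate was actually at risk. A sketch with invented numbers:

```python
# Illustrative baseline-inflation calculation; the hectare figures are
# invented, not drawn from any specific Verra project.
claimed_baseline_loss_ha = 10_000   # forest loss the project predicted
estimated_real_loss_ha = 2_000      # loss the independent studies estimate

inflation_pct = (claimed_baseline_loss_ha / estimated_real_loss_ha - 1) * 100
print(f"threat overstated by {inflation_pct:.0f}%")   # 400%, the average found
```

Because credits are issued against the baseline, a project that overstates the threat by 400% can sell roughly five credits for every one that reflects a real reduction.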
[…]
Barbara Haya, the director of the Berkeley Carbon Trading Project, has been researching carbon credits for 20 years, hoping to find a way to make the system function. She said: “The implications of this analysis are huge. Companies are using credits to make claims of reducing emissions when most of these credits don’t represent emissions reductions at all.
“Rainforest protection credits are the most common type on the market at the moment. And it’s exploding, so these findings really matter. But these problems are not just limited to this credit type. These problems exist with nearly every kind of credit.
“One strategy to improve the market is to show what the problems are and really force the registries to tighten up their rules so that the market could be trusted. But I’m starting to give up on that. I started studying carbon offsets 20 years ago studying problems with protocols and programs. Here I am, 20 years later having the same conversation. We need an alternative process. The offset market is broken.”
Contrails — the wispy ice clouds trailing behind flying jets — “are surprisingly bad for the environment,” reports CNN: A study that looked at aviation’s contribution to climate change between 2000 and 2018 concluded that contrails create 57% of the sector’s warming impact, significantly more than the CO2 emissions from burning fuel. They do so by trapping heat that would otherwise be released into space.
And yet, the problem may have an apparently straightforward solution. Contrails — short for condensation trails, which form when water vapor condenses into ice crystals around the small particles emitted by jet engines — require cold and humid atmospheric conditions, and don’t always stay around for long. Researchers say that by targeting specific flights that have a high chance of producing contrails, and varying their flight path ever so slightly, much of the damage could be prevented.
Adam Durant, a volcanologist and entrepreneur based in the UK, is aiming to do just that. “We could, in theory, solve this problem for aviation within one or two years,” he says…. Of contrails’ climate impact, “80 or 90% is coming from only maybe five to 10% of all flights,” says Durant. “Simply redirecting a small proportion of flights can actually save the majority of the contrail climate impact….”
In 2021, scientists calculated that addressing the contrail problem would cost under $1 billion a year, but provide benefits worth more than 1,000 times as much. And a study from Imperial College London showed that diverting just 1.7% of flights could reduce the climate damage of contrails by as much as 59%.
Durant’s company Satavia is now testing its technology with two airlines and “actively looking for more airlines in 2023 to work with, as we start scaling up the service that we offer.”
Truly addressing the issue may require some changes to air traffic rules, Durant says — but he’s not the only one working on the issue. There’s also the task force of a non-profit energy think tank that includes six airlines, plus researchers and academics. “We could seriously reduce, say, 50% of the industry’s contrails impact by 2030,” Durant tells CNN. “That’s totally attainable, because we can do it with software and analytics.”
[…] While all scientific results from the LHCb collaboration are already publicly available through open access papers, the data used by the researchers to produce these results is now accessible to anyone in the world through the CERN open data portal. The data release is made in the context of CERN’s Open Science Policy, reflecting the values of transparency and international collaboration enshrined in the CERN Convention for more than 60 years.
[…]
The data sample made available amounts to 20% of the total data set collected by the LHCb experiment in 2011 and 2012 during LHC Run 1. It comprises 200 terabytes containing information obtained from proton–proton collision events filtered and recorded with the detector.
[…]
The analysis of LHC data is a complex and time-consuming exercise. Therefore, to facilitate the analysis, the samples are accompanied by extensive documentation and metadata, as well as a glossary explaining several hundred special terms used in the preprocessing. The data can be analyzed using dedicated LHCb algorithms, which are available as open source software.
Researchers have announced that they simulated two minuscule black holes in a quantum computer and transmitted a message between them through what amounted to a tunnel in space-time.
They said that based on the quantum information teleported, a traversable wormhole appeared to have emerged, but that no rupture of space and time was physically created in the experiment, according to the study published in the journal Nature on Wednesday.
[…]
Caltech physicist Maria Spiropulu, a co-author of the research, described it as having the characteristics of a “baby wormhole”, and now hopes to make “adult wormholes and toddler wormholes step-by-step”. The wormhole dynamics were observed on a quantum device at Google called the Sycamore quantum processor.
Experts who were not involved in the experiment cautioned that it was important to note that a physical wormhole had not actually been created, but noted the future possibilities.
Daniel Harlow, a physicist at MIT, told the New York Times the experiment was based on a model so simple that it could just as well have been studied with pencil and paper.
“I’d say that this doesn’t teach us anything about quantum gravity that we didn’t already know,” Harlow wrote. “On the other hand, I think it is exciting as a technical achievement, because if we can’t even do this (and until now we couldn’t), then simulating more interesting quantum gravity theories would certainly be off the table.”
The study authors themselves made clear that scientists remain a long way from being able to send people or other living beings through such a portal.
[…]
“These ideas have been around for a long time and they’re very powerful ideas,” Lykken said. “But in the end, we’re in experimental science, and we’ve been struggling now for a very long time to find a way to explore these ideas in the laboratory. And that’s what’s really exciting about this. It’s not just, ‘Well, wormholes are cool.’ This is a way to actually look at these very fundamental problems of our universe in a laboratory setting.”
Scientists at the U.S. Department of Agriculture’s (USDA) Agricultural Research Service (ARS) recently announced that plants could be used to produce nanobodies that quickly block emerging pathogens in human medicine and agriculture. These nanobodies represent a promising new way to treat viral diseases, including SARS-CoV-2.
Nanobodies are small antibody proteins naturally produced in specific animals like camels, alpacas, and llamas.
ARS researchers turned to evaluating nanobodies to prevent and treat citrus greening disease in citrus trees. These scientists are now using their newly developed and patented Symbiont™ technology to show that nanobodies can be easily produced in a plant system with broad agricultural and public health applications.
As a proof of concept, researchers showed that nanobodies targeting the SARS-CoV-2 virus could be made in plant cells and remain functional in blocking the binding of the SARS-CoV-2 spike protein to its receptor protein, the interaction responsible for initiating viral infection in human cells.
“We initially wanted to develop sustainable solutions to pathogens in crop production,” said ARS researcher Robert Shatters, Jr. “The results of that research are indeed successful and beneficial for the nation’s agricultural system. But now we are aware of an even greater result—the benefits of producing therapeutics in plants now justify the consideration of using plants to mass produce COVID-19 protein-based therapies.”
AgroSource, Inc. collaborated with USDA-ARS to develop the plant-based production system. They are currently taking the necessary steps to see how they can move this advancement into the commercial sector.
“This is a huge breakthrough for science and innovative solutions to agricultural and public health challenges,” said ARS researcher Michelle Heck. “This cost-efficient, plant-based system proves that there are alternative ways to confront and prevent the spread of emerging pathogens. The approach has the potential to massively expand livelihood development opportunities in rural agricultural areas of the nation and in other countries.”
The findings are published on the bioRxiv preprint server.
More information: Marco Pitino et al, Plant production of high affinity nanobodies that block SARS-CoV-2 spike protein binding with its receptor, human angiotensin converting enzyme, bioRxiv (2022). DOI: 10.1101/2022.09.03.506425
For the past 50 years, scientists around the world have debated why lightning zig-zags and how it is connected to the thunder cloud above.
There hasn’t been a definitive explanation until now: a University of South Australia plasma physicist has published a landmark paper that solves both mysteries.
[…]
The answer? Singlet-delta metastable oxygen molecules.
Basically, lightning happens when electrons hit oxygen molecules with enough energy to create high-energy singlet-delta oxygen molecules. After colliding with the molecules, the “detached” electrons form a highly conducting step—initially luminous—that redistributes the electric field, causing successive steps.
The conducting column connecting the step to the cloud remains dark when electrons attach to neutral oxygen molecules, followed by immediate detachment of the electrons by singlet-delta molecules.
[…]
The paper, “Toward a theory of stepped leaders in lightning,” is published in the Journal of Physics D: Applied Physics. It is authored by Dr. John Lowke and Dr. Endre Szili from the Future Industries Institute at the University of South Australia.
More information: John J Lowke et al, Toward a theory of “stepped-leaders” of lightning, Journal of Physics D: Applied Physics (2022). DOI: 10.1088/1361-6463/aca103
UCSF researchers use ISRIB to block the molecular stress response in order to restore cognitive function.
ISRIB, a small molecule identified by University of California, San Francisco (UCSF) researchers, can repair the neural and cognitive effects of concussion in mice weeks after the damage, according to a new study.
ISRIB blocks the integrated stress response (ISR), a quality control process for protein production that, when activated chronically, can be harmful to cells.
The study, which was recently published in the Proceedings of the National Academy of Sciences, discovered that ISRIB reverses the effects of traumatic brain injury (TBI) on dendritic spines, an area of neurons vital to cognition. The drug-treated mice also showed sustained improvements in working memory.
“Our goal was to see if ISRIB could ameliorate the neural effects of concussion,” said Michael Stryker, Ph.D., a co-senior author of the study and professor of physiology at UCSF. “We were pleased to find the drug was tremendously successful in normalizing neuronal and cognitive function with lasting effects.”
TBI is a leading cause of long-term neurological disability, with patients’ quality of life suffering as a result of difficulties in concentration and memory. It’s also the strongest environmental risk factor for dementia — even a minor concussion boosts an individual’s risk dramatically.
[…]
Using advanced imaging techniques, Frias observed the effects of TBI on dendritic spines, the primary site of excitatory communication between neurons, over the course of multiple days.
In healthy conditions, neurons show a fairly consistent rate of spine formation, maturation, and elimination – dynamics that support learning and memory. But after a single mild concussion, mouse cortical neurons showed a massive burst of newly formed spines and continued to make excessive spines for as long as they were measured.
“Some may find this counterintuitive at first, assuming more dendritic spines would be a good thing for making new memories,” said co-senior author Susanna Rosi, PhD, a professor of physical therapy and neurological surgery at UCSF at the time of the study, now also at Altos Labs. “But in actuality, having all too many new spines is like being in a noisy room – when too many people are talking, you can’t hear the information you need.”
These new spines didn’t stick around for very long, however, and most were removed within days, meaning they hadn’t formed lasting functional synaptic connections.
These aberrant dynamics were rapidly reversed once mice were treated with ISRIB. By blocking the ISR, the drug was able to repair the neuronal structural changes resulting from the brain injury and restore normal rates of spine dynamics. These neuronal structural alterations were also associated with an improvement in performance to normal levels in a behavioral assay of working memory, which persisted for over a month after the final treatment.
“A month in a mouse is several years in a human, so to be able to reverse the effects of concussion in such a lasting way is really exciting,” said Frias.
Dr. Jiang, an assistant professor in the UBC faculty of forestry and the Canada Research Chair in Sustainable Functional Biomaterials, started developing a “biofoam” many years ago both to find new uses for wood waste and reduce pollution from packaging foam.
“Styrofoam waste fills up to 30 percent of global landfills and can take more than 500 years to break down. Our biofoam breaks down in the soil in a couple of weeks, requires little heat and few chemicals to make, and can be used as a substitute for packaging foams, packing peanuts and even thermal insulation boards,” says Dr. Jiang.
[…]
“Our Nation was trying to create a new economy out of what was left of our forest after the wildfires and the damage caused by the mountain pine beetle epidemic in the 1990s and early 2000s. The amount of timber available for harvest in the next 20 to 60 years was significantly reduced. I have often asked why, when trees are harvested, up to 50 percent of the tree is left behind to just burn. As a Nation, we were also concerned about the losses in habitat, water quality, decline in moose and salmon populations and the acceleration in climate change,”
[…]
“A unique feature of this project is that the intellectual property is shared between UBC and First Nations,”
[…]An important clinical trial is now underway in the UK. The study is the first to transfuse red blood cells grown in the lab from donated stem cells into humans. Should this research pay off, these blood cells would be incredibly valuable for people with rare blood types, though they wouldn’t replace the need for traditional blood donation.
The RESTORE trial, as it’s known, is being conducted by scientists from the UK’s National Health Service and various universities. At least 10 healthy volunteers are expected to be enrolled in the study. All of them will receive two mini-transfusions, spaced four months apart and in random order, of the lab-grown blood cells and standard cells, both of which are derived from the same donor. As of early Monday, two participants have already gotten the lab-grown blood cells and so far appear to have experienced no side effects.
The first-of-its-kind experiment is a Phase I trial, meaning that it’s primarily designed to test the safety of a novel or experimental treatment. But the lab-grown cells are theoretically fresher than the mix of newer and older blood cells taken from a typical blood donation (on average, red blood cells live for about 120 days). So the researchers are hoping that the lab-grown cells survive longer than the standard cells in their recipients.
“If our trial, the first such in the world, is successful, it will mean that patients who currently require regular long-term blood transfusions will need fewer transfusions in [the] future, helping transform their care,” said chief researcher Cedric Ghevaert, a hematologist and a professor in transfusion medicine at the University of Cambridge, in a statement released by the NHS.
[…]
Should this project turn out to be a success, lab-grown blood cells still won’t replace the donated supply anytime soon. The team’s process is much less efficient than what the human body can do. Currently, for instance, they need about 24 liters of their nutrient solution to filter out one to two tablespoons of red blood cells. Meanwhile, about 45% of our blood is composed of red blood cells.
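Rough arithmetic makes the scale of the challenge clear. The conversion factors below are standard, and the per-donation figures are textbook approximations rather than numbers from the trial:

```python
# Back-of-the-envelope efficiency estimate (illustrative).
ML_PER_TBSP = 14.8
solution_liters = 24.0                  # nutrient solution per batch
yield_ml = 1.5 * ML_PER_TBSP            # "one to two tablespoons" of red cells

# A standard whole-blood donation is roughly 470 ml, about 45% red cells.
donation_rbc_ml = 470 * 0.45

batches = donation_rbc_ml / yield_ml
print(f"red cells per batch: {yield_ml:.0f} ml")
print(f"to match one donation: ~{batches:.0f} batches, "
      f"~{batches * solution_liters:.0f} liters of solution")
```

By that estimate, matching the red cells in a single conventional donation takes on the order of a couple of hundred liters of culture medium, which is why the researchers frame lab-grown cells as a targeted therapy for hard-to-match patients rather than a replacement for donors.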
Even if mass-produced lab-grown blood cells are a far off possibility, they may still be able to help many people in the near future. This technology could one day provide a more reliable and longer-lasting supply of blood cells to people who have a rare mix of blood types or who have developed conditions that make it difficult to receive standard transfusions, such as sickle cell disease.
A new experiment has shown that zapping clouds with electrical charge can alter droplet sizes in fog or, potentially, help a constipated cloud to rain.
Last year Giles Harrison, from the University of Reading, and colleagues from the University of Bath, spent many early mornings chasing fogs in the Somerset Levels, flying uncrewed aircraft into the gloop and releasing charge. Their findings, published in Geophysical Research Letters, showed that when either positive or negative charge was emitted, the fog formed more water droplets.
“Electric charge can slow evaporation, or even – and this is always amazing to me – cause drops to explode because the electric force on them exceeds the surface tension holding them together,” said Harrison.
The findings could be put to good use in dry regions of the world, such as the Middle East and north Africa, as a means of encouraging clouds to release their rain. Cloud droplets are larger than fog droplets and so more likely to collide, and Harrison and his colleagues believe that adding electrical charge to a cloud could help droplets to stick together and become more weighty.
Dr. Bik is a microbiologist who has worked at Stanford University and for the Dutch National Institute for Health, and who is “blessed” with “what I’m told is a better-than-average ability to spot repeating patterns,” according to her new Op-Ed in the New York Times.
In 2014 she spotted the same photo “being used in two different papers to represent results from three entirely different experiments….” Although this was eight years ago, I distinctly recall how angry it made me. This was cheating, pure and simple. By editing an image to produce a desired result, a scientist can manufacture proof for a favored hypothesis, or create a signal out of noise. Scientists must rely on and build on one another’s work. Cheating is a transgression against everything that science should be. If scientific papers contain errors or — much worse — fraudulent data and fabricated imagery, other researchers are likely to waste time and grant money chasing theories based on made-up results…..
But were those duplicated images just an isolated case? With little clue about how big this would get, I began searching for suspicious figures in biomedical journals…. By day I went to my job in a lab at Stanford University, but I was soon spending every evening and most weekends looking for suspicious images. In 2016, I published an analysis of 20,621 peer-reviewed papers, discovering problematic images in no fewer than one in 25. Half of these appeared to have been manipulated deliberately — rotated, flipped, stretched or otherwise photoshopped. With a sense of unease about how much bad science might be in journals, I quit my full-time job in 2019 so that I could devote myself to finding and reporting more cases of scientific fraud.
Using my pattern-matching eyes and lots of caffeine, I have analyzed more than 100,000 papers since 2014 and found apparent image duplication in 4,800 and similar evidence of error, cheating or other ethical problems in an additional 1,700. I’ve reported 2,500 of these to their journals’ editors and — after learning the hard way that journals often do not respond to these cases — posted many of those papers along with 3,500 more to PubPeer, a website where scientific literature is discussed in public….
Unfortunately, many scientific journals and academic institutions are slow to respond to evidence of image manipulation — if they take action at all. So far, my work has resulted in 956 corrections and 923 retractions, but a majority of the papers I have reported to the journals remain unaddressed.
Manipulated images “raise questions about an entire line of research, which means potentially millions of dollars of wasted grant money and years of false hope for patients.” Part of the problem is that despite “peer review” at scientific journals, “peer review is unpaid and undervalued, and the system is based on a trusting, non-adversarial relationship. Peer review is not set up to detect fraud.”
But there are other problems. Most of my fellow detectives remain anonymous, operating under pseudonyms such as Smut Clyde or Cheshire. Criticizing other scientists’ work is often not well received, and concerns about negative career consequences can prevent scientists from speaking out. Image problems I have reported under my full name have resulted in hateful messages, angry videos on social media sites and two lawsuit threats….
Things could be about to get even worse. Artificial intelligence might help detect duplicated data in research, but it can also be used to generate fake data. It is easy nowadays to produce fabricated photos or videos of events that never happened, and A.I.-generated images might have already started to poison the scientific literature. As A.I. technology develops, it will become significantly harder to distinguish fake from real.
Science needs to get serious about research fraud.
Among their proposed solutions? “Journals should pay the data detectives who find fatal errors or misconduct in published papers, similar to how tech companies pay bounties to computer security experts who find bugs in software.”
Certain star clusters do not seem to be following current understandings of Isaac Newton’s laws of gravity, according to new research published on Wednesday.
The study, published in the Monthly Notices of the Royal Astronomical Society, analyzed open star clusters, which form when thousands of stars are born within a short time period in a huge gas cloud.
As the stars are born, they blow away the remnants of the gas cloud, causing the cluster to expand and create a loose formation of dozens to thousands of stars held together by weak gravitational forces.
As the clusters dissolve, the stars accumulate in two “tidal tails”: one trailing behind the cluster and the other pushing ahead of it.
“According to Newton’s laws of gravity, it’s a matter of chance in which of the tails a lost star ends up,” explains Dr. Jan Pflamm-Altenburg of the Helmholtz Institute of Radiation and Nuclear Physics at the University of Bonn. “So both tails should contain about the same number of stars. However, in our work we were able to prove for the first time that this is not true: In the clusters we studied, the front tail always contains significantly more stars near the cluster than the rear tail.”
Dr. Tereza Jerabkova, a co-author of the paper, explained that it is very difficult to determine which stars belong to which tail.
“To do this, you have to look at the velocity, direction of motion and age of each of these objects,” said Jerabkova, who managed to develop a method to accurately count the stars for the first time using data from the European Space Agency’s Gaia mission.
The perks of a modified theory
When the researchers looked at the data, they found that it did not fit Newton’s law of gravity and instead fit better with an alternate theory called Modified Newtonian Dynamics (MOND).
“Put simply, according to MOND, stars can leave a cluster through two different doors,” explained Prof. Dr. Pavel Kroupa of the Helmholtz Institute of Radiation and Nuclear Physics. “One leads to the rear tidal tail, the other to the front. However, the first is much narrower than the second – so it’s less likely that a star will leave the cluster through it. Newton’s theory of gravity, on the other hand, predicts that both doors should be the same width.”
The researchers simulated the stellar distribution expected according to the MOND theory and found that it lined up well with what they observed in the data from the Gaia mission.
Dr. Ingo Thies, who played a key role in the simulations, explained that the researchers needed to rely on relatively simple computational methods in the study since currently there are no mathematical tools for more detailed analyses of the MOND theory.
The simulations also coincided with the Gaia data in terms of how long the star clusters typically survive, which is much shorter than would be expected according to Newton’s laws.
“This explains a mystery that has been known for a long time,” said Kroupa. “Namely, star clusters in nearby galaxies seem to be disappearing faster than they should.”
The MOND theory is controversial as modifications to Newton’s laws of gravity would have far-reaching consequences for other areas of physics as well, although they would solve many problems facing cosmology.
The finding, which researchers made by measuring the electrical fields around honeybee (Apis mellifera) hives, reveals that bees can produce as much atmospheric electricity as a thunderstorm. This can play an important role in steering dust to shape unpredictable weather patterns, and their impact may even need to be included in future climate models.
Insects’ tiny bodies can pick up positive charge while they forage — either from the friction of air molecules against their rapidly beating wings (honeybees can flap their wings more than 230 times a second) or from landing on electrically charged surfaces. But the effects of these tiny charges were previously assumed to be on a small scale. Now, a new study, published Oct. 24 in the journal iScience, shows that insects can generate a shocking amount of electricity.
[…]
To test whether honeybees produce sizable changes in the electric field of our atmosphere, the researchers placed an electric field monitor and a camera near the site of several honeybee colonies. In the three minutes during which the insects flooded into the air, the researchers found that the potential gradient above the hives increased to 100 volts per meter. In other swarming events, the scientists measured effects as high as 1,000 volts per meter, making the charge density of a large honeybee swarm roughly six times greater than that of electrified dust storms and eight times greater than that of a storm cloud.
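As a rough sense of scale, a measured jump in the potential gradient can be turned into a charge density with a textbook slab approximation. The sketch below is a back-of-envelope illustration only, not the authors’ analysis; the swarm thickness used here is an assumed placeholder.

```python
# Back-of-envelope estimate (not the authors' method): model the swarm
# as a uniform charged slab. Gauss's law gives a field jump across the
# slab of delta_E = rho * d / eps0, so rho = eps0 * delta_E / d.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
DELTA_E = 100.0    # V/m, potential-gradient increase reported above a hive
THICKNESS = 3.0    # m, assumed vertical extent of the swarm (placeholder)

rho = EPS0 * DELTA_E / THICKNESS
print(f"Implied swarm charge density: {rho:.2e} C/m^3")
```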
The scientists also found that denser insect clouds meant bigger electrical fields — an observation that enabled them to model other swarming insects such as locusts and butterflies.
Locusts often swarm to “biblical scales,” the scientists said, creating thick clouds 460 square miles (1,191 square kilometers) in size and packing up to 80 million locusts into less than half a square mile (1.3 square kilometers). The researchers’ model predicted that swarming locusts’ effect on the atmospheric electric field was staggering, generating densities of electric charge similar to those made by thunderstorms.
The researchers say it’s unlikely the insects are producing storms themselves, but even when potential gradients don’t meet the conditions to make lightning, they can still have other effects on the weather. Electric fields in the atmosphere can ionize particles of dust and pollutants, changing their movement in unpredictable ways. As dust can scatter sunlight, knowing how it moves and where it settles is important to understanding a region’s climate.
Researchers have succeeded in growing brain cells in a lab and hooking them up to electronic connectors, demonstrating that the cells can learn to play the seminal console game Pong.
Led by Brett Kagan, chief scientific officer at Cortical Labs, the researchers showed that by integrating neurons into digital systems they could harness “the inherent adaptive computation of neurons in a structured environment”.
According to the paper published in the journal Neuron, the biological neural networks grown from human or rodent origins were integrated with computing hardware via a high-density multielectrode array.
“Through electrophysiological stimulation and recording, cultures are embedded in a simulated game-world, mimicking the arcade game Pong.
“Applying implications from the theory of active inference via the free energy principle, we find apparent learning within five minutes of real-time gameplay not observed in control conditions,” the paper said. “Further experiments demonstrate the importance of closed-loop structured feedback in eliciting learning over time.”
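The closed-loop logic, predictable feedback when the paddle connects and unpredictable stimulation when it misses, can be caricatured in a few lines. The toy loop below replaces the culture with a trivial numerical learner; it is a sketch of the feedback structure under those assumptions, not the paper’s stimulation protocol.

```python
import random

# Toy closed-loop sketch (not the paper's protocol): a trivial learner
# stands in for the neural culture. A hit earns predictable feedback
# that reinforces tracking; a miss delivers an unpredictable stimulus
# that randomly perturbs the learner, echoing the free-energy framing.
def play(episodes=2000, lr=0.05, seed=1):
    rng = random.Random(seed)
    weight = 0.0                 # how strongly the "culture" tracks the ball
    hits = []
    for _ in range(episodes):
        ball = rng.uniform(0, 1)                     # stimulation encodes ball position
        paddle = weight * ball + (1 - weight) * 0.5  # action decoded from activity
        hit = abs(paddle - ball) < 0.2
        if hit:
            weight = min(1.0, weight + lr * (1 - weight))        # predictable feedback
        else:
            weight = max(0.0, weight + lr * rng.uniform(-1, 1))  # unpredictable stimulus
        hits.append(hit)
    return sum(hits[:200]) / 200, sum(hits[-200:]) / 200

early, late = play()
print(f"hit rate: {early:.2f} (early) -> {late:.2f} (late)")
```

Even this caricature shows the qualitative signature the paper reports: performance in the closed-loop condition improves over time, whereas removing the structured feedback would leave the hit rate at chance.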
[…]
This installation enables a live plant to control a machete. plant machete has a control system that reads and utilizes the electrical noises found in a live philodendron. The system uses an open source micro-controller connected to the plant to read varying resistance signals across the plant’s leaves. Using custom software, these signals are mapped in real time to the movements of the joints of the industrial robot holding a machete. In this way, the movements of the machete are determined by input from the plant. Essentially, the plant is the brain of the robot, controlling the machete and determining how it swings, jabs, slices and interacts in space.
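In control-loop terms the piece is straightforward: sample a noisy electrical signal, normalize it, and map it to joint targets. The sketch below is purely illustrative and is not the artist’s code; read_resistance() is a hypothetical stand-in for the micro-controller’s analog read.

```python
import random
import time

# Illustrative control loop (not the artist's actual code).
# read_resistance() is a hypothetical stand-in for the micro-controller
# sampling resistance across the philodendron's leaves.
def read_resistance():
    return random.uniform(10_000, 50_000)  # ohms, placeholder plant signal

def to_joint_angles(resistance, n_joints=6):
    """Normalize the reading and spread it across joints with phase offsets."""
    x = (resistance - 10_000) / 40_000                       # map to [0, 1]
    return [-90 + 180 * ((x + j / n_joints) % 1.0) for j in range(n_joints)]

for _ in range(20):                                          # a few cycles of a ~10 Hz loop
    angles = to_joint_angles(read_resistance())
    print("joint targets (deg):", [round(a, 1) for a in angles])
    time.sleep(0.1)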
When ice sheets melt, something strange and highly counterintuitive happens to sea levels.
It works basically like a seesaw. In the area close to where these masses of glacial ice melt, ocean levels fall. Yet thousands of miles away, they actually rise. This happens largely because of the loss of the gravitational pull toward the ice sheet, which causes the water to disperse away from it. The patterns have come to be known as sea level fingerprints, since each melting glacier or ice sheet affects sea level in a unique way. Elements of the concept—which lies at the heart of the understanding that global sea levels don’t rise uniformly—have been around for over a century, and modern sea level science has been built around it. But there has long been a hitch to the widely accepted theory: a sea level fingerprint had never definitively been detected.
A team of scientists—led by Harvard alumna Sophie Coulson and featuring Harvard geophysicist Jerry X. Mitrovica—believe they have detected the first. The findings are described in a new study published Thursday in Science. The work validates almost a century of sea level science and helps solidify confidence in models predicting future sea level rise.
[…]
Sea level fingerprints have been notoriously difficult to detect because of the major fluctuations in ocean levels brought on by changing tides, currents, and winds. What makes it such a conundrum is that researchers are trying to detect millimeter level motions of the water and link them to melting glaciers thousands of miles away.
[…]
The new study uses newly released satellite data from a European marine monitoring agency covering more than 30 years of observations in the vicinity of the Greenland Ice Sheet, spanning enough of the surrounding ocean to capture the seesaw in sea levels produced by the fingerprint.
The satellite data caught the eye of Mitrovica and colleague David Sandwell of the Scripps Institution of Oceanography. Typically, satellite records from this region had extended only up to the southern tip of Greenland, but in this new release the data reached ten degrees higher in latitude, allowing them to eyeball a potential hint of the seesaw caused by the fingerprint.
[…]
Coulson quickly collected three decades’ worth of the best observations she could find on ice height change within the Greenland Ice Sheet, as well as reconstructions of glacier height change across the Canadian Arctic and Iceland. She combined these datasets to create predictions of sea level change in the region from 1993 to 2019, which she then compared with the new satellite data. The fit was perfect: a one-to-one match showing with more than 99.9% confidence that the pattern of sea level change revealed by the satellites is a fingerprint of the melting ice sheet.
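That one-to-one match is, in effect, a regression of observed sea level trends against predicted ones with a slope statistically indistinguishable from 1. The sketch below illustrates such a comparison on synthetic numbers; it is not the study’s pipeline or data.

```python
import numpy as np

# Illustrative fingerprint check (synthetic data, not the study's):
# regress observed sea-level trends against the predicted fingerprint;
# a slope near 1 with a tight confidence interval is the one-to-one match.
rng = np.random.default_rng(0)
predicted = rng.normal(0.0, 1.0, 500)               # mm/yr, synthetic prediction
observed = predicted + rng.normal(0.0, 0.3, 500)    # synthetic altimetry plus noise

slope, intercept = np.polyfit(predicted, observed, 1)
residuals = observed - (slope * predicted + intercept)
se = residuals.std(ddof=2) / np.sqrt(((predicted - predicted.mean()) ** 2).sum())
print(f"slope = {slope:.3f} +/- {1.96 * se:.3f} (95% CI)")
```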
Tiny nets woven from DNA strands cover the spike proteins of the virus that causes COVID-19 and give off a glowing signal in this artist’s rendering. Credit: Xing Wang, University of Illinois
Tiny nets woven from DNA strands can ensnare the spike protein of the virus that causes COVID-19, lighting up the virus for a fast-yet-sensitive diagnostic test—and also impeding the virus from infecting cells, opening a new possible route to antiviral treatment, according to a new study.
Researchers at the University of Illinois Urbana-Champaign and collaborators demonstrated the DNA nets’ ability to detect and impede COVID-19 in human cell cultures in a paper published in the Journal of the American Chemical Society.
“This platform combines the sensitivity of PCR and the speed and low cost of antigen tests,” said study leader Xing Wang, a professor of bioengineering and of chemistry at Illinois. “We need tests like this for a couple of reasons. One is to prepare for the next pandemic. The other reason is to track ongoing viral epidemics—not only coronaviruses, but also other deadly and economically impactful viruses like HIV or influenza.”
DNA is best known for its genetic properties, but it also can be folded into custom nanoscale structures that can perform functions or specifically bind to other structures much like proteins do. The DNA nets the Illinois group developed were designed to bind to the coronavirus spike protein—the structure that sticks out from the surface of the virus and binds to receptors on human cells to infect them. Once bound, the nets give off a fluorescent signal that can be read by an inexpensive handheld device in about 10 minutes.
The researchers demonstrated that their DNA nets effectively targeted the spike protein and were able to detect the virus at very low levels, equivalent to the sensitivity of gold-standard PCR tests that can take a day or more to return results from a clinical lab.
The technique holds several advantages, Wang said. It does not need any special preparation or equipment, and can be performed at room temperature, so all a user would do is mix the sample with the solution and read it. The researchers estimated in their study that the method would cost $1.26 per test.
“Another advantage of this measure is that we can detect the entire virus, which is still infectious, and distinguish it from fragments that may not be infectious anymore,” Wang said. This not only gives patients and physicians a better understanding of whether they are infectious, but it could also greatly improve community-level modeling and tracking of active outbreaks, such as through wastewater.
In addition, the DNA nets inhibited the virus’s spread in live cell cultures, with the antiviral activity increasing with the size of the DNA net scaffold. This points to DNA structures’ potential as therapeutic agents, Wang said.
“I had this idea at the very beginning of the pandemic to build a platform for testing, but also for inhibition at the same time,” Wang said. “Lots of other groups working on inhibitors are trying to wrap up the entire virus, or the parts of the virus that provide access to antibodies. This is not good, because you want the body to form antibodies. With the hollow DNA net structures, antibodies can still access the virus.”
The DNA net platform can be adapted to other viruses, Wang said, and even multiplexed so that a single test could detect multiple viruses.
“We’re trying to develop a unified technology that can be used as a plug-and-play platform. We want to take advantage of DNA sensors’ high binding affinity, low limit of detection, low cost and rapid preparation,” Wang said.
The paper is titled “Net-shaped DNA nanostructures designed for rapid/sensitive detection and potential inhibition of the SARS-CoV-2 virus.”
More information: Neha Chauhan et al, Net-Shaped DNA Nanostructures Designed for Rapid/Sensitive Detection and Potential Inhibition of the SARS-CoV-2 Virus, Journal of the American Chemical Society (2022). DOI: 10.1021/jacs.2c04835