Debashis Chanda, a nanoscience researcher at the University of Central Florida, and his team have created a way to mimic nature’s ability to reflect light and produce vivid color without absorbing heat the way traditional pigments do.
Chanda’s research, published in the journal Science Advances, explains and explores structural color and how people could use it to live cooler in a rapidly warming world.
Structural colors are created not from traditional pigmentation but from the arrangement of colorless materials to reflect light in certain ways. This process is how rainbows are made after it rains and how suncatchers bend light to create dazzling displays of color.
[…]
One driver for the researchers: A desire to avoid toxic materials
Vivid paints are traditionally made with synthetic materials, including heavy metals.
“We use a lot of artificially synthesized organic molecules, lots of metal,” Chanda told NPR. “Think about your deep blues, you need cobalt, a deep red needs cadmium. They are toxic. We are polluting our nature and our whole habitat by using this kind of paint. So one of the major motivations for us was to create a color based on non-toxic material.”
So why can’t we simply use ground-up peacock feathers to recreate their vivid greens, blues and golds? Because the feathers contain no pigment. Some of the brightest colors in nature aren’t pigmented at all, peacock feathers included.
These bright, beautiful colors are achieved by the bending and reflection of light: the structure of a wing, a feather or another material reflects light back at the viewer. The surface doesn’t absorb the light; it beams it back out in the form of a visible color, and this is where things get interesting.
Chanda’s research began here, with his fascination with natural colors and how they are achieved in nature.
Beyond just the beautiful arrays of color that structure can create, Chanda also found that unlike pigments, structural paint does not absorb any infrared light.
Infrared light is the reason black cars get hot on sunny days and asphalt is hot to the touch in summer. These surfaces absorb infrared light as heat energy — the darker the color, the more heat the surface absorbs. That’s why people are advised to wear lighter colors in hotter climates and why many buildings are painted bright whites and beiges.
Chanda found that structural color paint does not absorb any heat. It reflects all infrared light back out. This means that in a rapidly warming climate, this paint could help communities keep cool.
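The link between absorptivity and surface temperature can be illustrated with a toy energy balance. The sketch below is not from Chanda’s paper; it solves a standard Stefan-Boltzmann balance for a sunlit surface, with assumed values for insolation, emissivity, and convection, purely to show how much the absorbed fraction of sunlight alone drives the temperature gap.

```python
# Toy steady-state energy balance for a sunlit surface (illustrative only;
# every parameter value below is an assumption, not a figure from the study).
SOLAR_FLUX = 1000.0   # W/m^2, typical clear-sky insolation (assumed)
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/m^2/K^4
T_AIR = 300.0         # ambient air temperature in kelvin (assumed)
H_CONV = 10.0         # convective heat-transfer coefficient, W/m^2/K (assumed)

def surface_temperature(absorptivity: float, emissivity: float = 0.9) -> float:
    """Find the temperature where absorbed sunlight balances
    re-radiation plus convection, via simple bisection."""
    def net_flux(t: float) -> float:
        absorbed = absorptivity * SOLAR_FLUX
        radiated = emissivity * SIGMA * (t**4 - T_AIR**4)
        convected = H_CONV * (t - T_AIR)
        return absorbed - radiated - convected

    lo, hi = T_AIR, T_AIR + 200.0  # bracket: net flux positive at lo, negative at hi
    for _ in range(60):
        mid = (lo + hi) / 2
        if net_flux(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A dark pigmented surface absorbs most incoming sunlight; a surface that
# reflects the infrared portion absorbs far less, so it stays cooler.
t_dark = surface_temperature(absorptivity=0.9)
t_cool = surface_temperature(absorptivity=0.3)
print(f"high-absorptivity surface: {t_dark - 273.15:.1f} C")
print(f"low-absorptivity surface:  {t_cool - 273.15:.1f} C")
```

With these assumed parameters, the gap between the two surfaces lands in the tens of degrees, the same order as the cooling the article describes.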
Chanda and his team tested the impact this paint had on the temperature of buildings covered in structural paint versus commercial paints and they found that structural paint kept surfaces 20 to 30 degrees cooler.
This, Chanda said, is a massive new tool that could be used to fight rising temperatures caused by global warming while still allowing us to have a bright and colorful world.
Unlike white and black cars, structural paint’s ability to reflect heat isn’t determined by how dark the color is. Blue, black or purple structural paints reflect just as much heat as bright whites or beige. This opens the door for more colorful, cooler architecture and design without having to worry about the heat.
A little paint goes a long way
It’s not just cleaner, Chanda said. Structural paint weighs much less than pigmented paint and doesn’t fade over time like traditional pigments.
“A raisin’s worth of structural paint is enough to cover the front and back of a door,” he said.
Unlike pigments, which rely on layers of pigment to achieve depth of color, structural paint requires only one thin layer of particles to fully cover a surface in color. This means that structural paint could be a boon for aerospace engineers who rely on the lowest weight possible to achieve higher fuel efficiency.
Scientists using data from the Atacama Cosmology Telescope in Chile have made a detailed map of dark matter’s distribution across a quarter of the sky.
The map shows the distribution of mass extending essentially as far back in time as we can see; it uses the cosmic microwave background as a backdrop for the dark matter portrait. The team’s research will be presented at the Future Science with CMB x LSS conference in Kyoto, Japan.
“We have mapped the invisible dark matter across the sky to the largest distances, and clearly see features of this invisible world that are hundreds of millions of light-years across,” said Blake Sherwin, a cosmologist at the University of Cambridge, in a Princeton University release. “It looks just as our theories predict.”
[…]
The only way dark matter is observed is indirectly, through its gravitational effects at large scales. Enter the Atacama Cosmology Telescope, which more precisely dated the universe in 2021. The telescope’s map builds on a map of the universe’s matter released earlier this year, which was produced using data from the Dark Energy Survey and the South Pole Telescope. That map upheld previous estimations of the ratio of ordinary matter to dark matter and found that the distribution of matter was less clumpy than previously thought.
The new map homes in on a lingering concern of Einstein’s general relativity: how the most massive objects in the universe, like supermassive black holes, bend light from more distant sources. One such source is the cosmic microwave background, the most ancient detectable light, which radiates from the aftermath of the Big Bang.
The researchers effectively used the background as a backlight, to illuminate regions of greater density in the universe.
“It’s a bit like silhouetting, but instead of just having black in the silhouette, you have texture and lumps of dark matter, as if the light were streaming through a fabric curtain that had lots of knots and bumps in it,” said Suzanne Staggs, director of the Atacama Cosmology Telescope and a physicist at Princeton, in the university release.
The cosmic microwave background as seen by the European Space Agency’s Planck observatory.
“The famous blue and yellow CMB image is a snapshot of what the universe was like in a single epoch, about 13 billion years ago, and now this is giving us the information about all the epochs since,” Staggs added.
The recent analysis suggests that the dark matter was lumpy enough to fit with the standard model of cosmology, which relies on Einstein’s theory of gravity.
Eric Baxter, an astronomer at the University of Hawai’i and a co-author of the research that resulted in the February dark matter map, told Gizmodo in an email that his team’s map was sensitive to low redshifts (meaning close by, in the more recent universe). The newer map, on the other hand, focuses exclusively on the lensing of the cosmic microwave background, meaning higher redshifts and a more sweeping scale.
“Said another way, our measurements and the new ACT measurements are probing somewhat different (and complementary) aspects of the matter distribution,” Baxter said. “Thus, rather than contradicting our previous results, the new results may be providing an important new piece of the puzzle about possible discrepancies with our standard cosmological model.”
“Perhaps the Universe is less lumpy than expected on small scales and at recent times (i.e. the regime probed by our analysis), but is consistent with expectations at earlier times and at larger scales,” Baxter added.
New instruments should help tease out the matter distribution of the universe. An upcoming telescope at the Simons Observatory in the Atacama is set to begin operations in 2024 and will map the sky nearly 10 times faster than the Atacama Cosmology Telescope, according to the Princeton release.
Researchers have discovered that in the exotic conditions of the early universe, waves of gravity may have shaken space-time so hard that they spontaneously created radiation.
[…]
a team of researchers has discovered that an exotic form of parametric resonance may even have occurred in the extremely early universe.
Perhaps the most dramatic event to occur in the entire history of the universe was inflation. This is a hypothetical event that took place when our universe was less than a second old. During inflation our cosmos swelled to dramatic proportions, becoming many orders of magnitude larger than it was before. The end of inflation was a very messy business, as gravitational waves sloshed back and forth throughout the cosmos.
Normally gravitational waves are exceedingly weak. We have to build detectors that are capable of measuring distances less than the width of an atomic nucleus to find gravitational waves passing through the Earth. But researchers have pointed out that in the extremely early universe these gravitational waves may have become very strong.
And they may even have created standing wave patterns, in which the waves weren’t traveling but stood still, almost frozen in place throughout the cosmos. Since gravitational waves are literally waves of gravity, the places where the waves are strongest represent an exceptional amount of gravitational energy.
The researchers found that this could have had major consequences for the electromagnetic field in the early universe. The regions of intense gravity may have excited the electromagnetic field enough to release some of its energy in the form of radiation, creating light.
This result gives rise to an entirely new phenomenon: the production of light from gravity alone. There’s no situation in the present-day universe that could allow this process to happen, but the researchers have shown that the early universe was a far stranger place than we could possibly imagine.
The experiment relies on materials that can change their optical properties in fractions of a second, which could be used in new technologies or to explore fundamental questions in physics.
The original double-slit experiment, performed in 1801 by Thomas Young at the Royal Institution, showed that light acts as a wave. Further experiments, however, showed that light actually behaves as both a wave and as particles – revealing its quantum nature.
These experiments had a profound impact on quantum physics, revealing the dual particle and wave nature of not just light, but other ‘particles’ including electrons, neutrons, and whole atoms.
Now, a team led by Imperial College London physicists has performed the experiment using ‘slits’ in time rather than space. They achieved this by firing light through a material that changes its properties in femtoseconds (quadrillionths of a second), only allowing light to pass through at specific times in quick succession.
Lead researcher Professor Riccardo Sapienza, from the Department of Physics at Imperial, said: “Our experiment reveals more about the fundamental nature of light while serving as a stepping-stone to creating the ultimate materials that can minutely control light in both space and time.”
Details of the experiment are published today in Nature Physics.
[…]
The material the team used was a thin film of indium tin oxide, which forms part of most mobile phone screens. Lasers changed the material’s reflectance on ultrafast timescales, creating the ‘slits’ for light. The material responded to the laser control much more quickly than the team expected, varying its reflectivity in a few femtoseconds.
The material is a metamaterial – one that is engineered to have properties not found in nature. Such fine control of light is one of the promises of metamaterials, and when coupled with spatial control, could create new technologies and even analogues for studying fundamental physics phenomena like black holes.
Co-author Professor Sir John Pendry said: “The double time slits experiment opens the door to a whole new spectroscopy capable of resolving the temporal structure of a light pulse on the scale of one period of the radiation.”
The team next want to explore the phenomenon in a ‘time crystal’, which is analogous to an atomic crystal, but where the optical properties vary in time.
Co-author Professor Stefan Maier said: “The concept of time crystals has the potential to lead to ultrafast, parallelized optical switches.”
What does a stressed plant sound like? A bit like bubble-wrap being popped. Researchers in Israel report in the journal Cell on March 30 that tomato and tobacco plants that are stressed—from dehydration or having their stems severed—emit sounds that are comparable in volume to normal human conversation. The frequency of these noises is too high for our ears to detect, but they can probably be heard by insects, other mammals, and possibly other plants.
“Even in a quiet field, there are actually sounds that we don’t hear, and those sounds carry information,” says senior author Lilach Hadany, an evolutionary biologist and theoretician at Tel Aviv University. “There are animals that can hear these sounds, so there is the possibility that a lot of acoustic interaction is occurring.”
Although ultrasonic vibrations have been recorded from plants before, this is the first evidence that they are airborne, a fact that makes them more relevant for other organisms in the environment. “Plants interact with insects and other animals all the time, and many of these organisms use sound for communication, so it would be very suboptimal for plants to not use sound at all,” says Hadany.
The researchers used microphones to record healthy and stressed tomato and tobacco plants, first in a soundproofed acoustic chamber and then in a noisier greenhouse environment. They stressed the plants via two methods: by not watering them for several days and by cutting their stems. After recording the plants, the researchers trained a machine-learning algorithm to differentiate between unstressed plants, thirsty plants, and cut plants.
The team found that stressed plants emit more sounds than unstressed plants. The plant sounds resemble pops or clicks, and a single stressed plant emits around 30–50 of these clicks per hour at seemingly random intervals, but unstressed plants emit far fewer sounds. “When tomatoes are not stressed at all, they are very quiet,” says Hadany.
An audio recording of plant sounds. The frequency was lowered so that it is audible to human ears. Credit: Khait et al.
Water-stressed plants began emitting noises before they were visibly dehydrated, and the frequency of sounds peaked after five days with no water before decreasing again as the plants dried up completely. The types of sound emitted differed with the cause of stress. The machine-learning algorithm was able to accurately differentiate between dehydration and stress from cutting and could also discern whether the sounds came from a tomato or tobacco plant.
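The classification step can be sketched in miniature. The following toy example is not the study’s actual pipeline: it uses synthetic feature vectors (standing in for real acoustic features such as click rate or peak frequency, which are assumptions here) and a simple nearest-centroid classifier, just to illustrate how recordings could be sorted into unstressed, dry, and cut classes.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_clicks(center: float, n: int) -> np.ndarray:
    """Stand-in for per-recording acoustic features (hypothetical:
    e.g. click rate, peak frequency, energy, duration)."""
    return rng.normal(loc=center, scale=1.0, size=(n, 4))

# Three classes mirroring the study's conditions, with made-up separations.
classes = {"unstressed": 0.0, "dry": 2.0, "cut": 4.0}
train = {label: fake_clicks(c, 100) for label, c in classes.items()}
test = {label: fake_clicks(c, 50) for label, c in classes.items()}

# Nearest-centroid classifier: assign each recording to the closest class mean.
centroids = {label: feats.mean(axis=0) for label, feats in train.items()}

def predict(x: np.ndarray) -> str:
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

correct = sum(predict(x) == label for label, feats in test.items() for x in feats)
total = sum(len(feats) for feats in test.values())
print(f"held-out accuracy: {correct / total:.2f}")
```

The real study trained a more capable machine-learning model on measured recordings; the point here is only the shape of the task: featurize each recording, then classify it by condition.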
Although the study focused on tomato and tobacco plants because they are easy to grow and standardize in the laboratory, the research team also recorded a variety of other plant species. “We found that many plants—corn, wheat, grape, and cactus plants, for example—emit sounds when they are stressed,” says Hadany.
A photo of a cactus being recorded. Credit: Itzhak Khait
The exact mechanism behind these noises is unclear, but the researchers suggest that it might be due to the formation and bursting of air bubbles in the plant’s vascular system, a process called cavitation.
Whether or not the plants are producing these sounds in order to communicate with other organisms is also unclear, but the fact that these sounds exist has big ecological and evolutionary implications. “It’s possible that other organisms could have evolved to hear and respond to these sounds,” says Hadany. “For example, a moth that intends to lay eggs on a plant or an animal that intends to eat a plant could use the sounds to help guide their decision.”
Other plants could also be listening in and benefiting from the sounds. We know from previous research that plants can respond to sounds and vibrations: Hadany and several other members of the team previously showed that plants increase the concentration of sugar in their nectar when they “hear” the sounds made by pollinators, and other studies have shown that plants change their gene expression in response to sounds. “If other plants have information about stress before it actually occurs, they could prepare,” says Hadany.
An illustration of a dehydrated tomato plant being recorded using a microphone. Credit: Liana Wait
Sound recordings of plants could be used in agricultural irrigation systems to monitor crop hydration status and help distribute water more efficiently, the authors say.
“We know that there’s a lot of ultrasound out there—every time you use a microphone, you find that a lot of stuff produces sounds that we humans cannot hear—but the fact that plants are making these sounds opens a whole new avenue of opportunities for communication, eavesdropping, and exploitation of these sounds,” says co-senior author Yossi Yovel, a neuro-ecologist at Tel Aviv University.
“So now that we know that plants do emit sounds, the next question is—’who might be listening?'” says Hadany. “We are currently investigating the responses of other organisms, both animals and plants, to these sounds, and we’re also exploring our ability to identify and interpret the sounds in completely natural environments.”
This is not a newly discovered phenomenon; plants, grasses and trees are very good at detecting stress and warning of it. For example:
Stressed plants show altered phenotypes, including changes in color, smell, and shape. Yet, the possibility that plants emit airborne sounds when stressed – similarly to many animals – has not been investigated. Here we show, to our knowledge for the first time, that stressed plants emit airborne sounds that can be recorded remotely, both in acoustic chambers and in greenhouses. We recorded ∼65 dBSPL ultrasonic sounds 10 cm from tomato and tobacco plants, implying that these sounds could be detected by some organisms from up to several meters away. We developed machine learning models that were capable of distinguishing between plant sounds and general noises, and identifying the condition of the plants – dry, cut, or intact – based solely on the emitted sounds. Our results suggest that animals, humans, and possibly even other plants, could use sounds emitted by a plant to gain information about the plant’s condition. More investigation on plant bioacoustics in general and on sound emission in plants in particular may open new avenues for understanding plants and their interactions with the environment, and it may also have a significant impact on agriculture.
Source: Plants emit informative airborne sounds under stress
The remarkable ability of plants to respond to their environment has led some scientists to believe it’s a sign of conscious awareness. A new opinion paper argues against this position, saying plants “neither possess nor require consciousness.”
To explain these apparent behaviors, a subset of scientists known as plant neurobiologists has argued that plants possess a form of consciousness. Most notably, evolutionary ecologist Monica Gagliano has performed experiments that allegedly hint at capacities such as habituation (learning from experience) and classical conditioning (like Pavlov’s salivating dogs). In these experiments, plants apparently “learned” to stop curling their leaves after being dropped repeatedly or to spread their leaves in anticipation of a light source. Armed with this experimental evidence, Gagliano and others have claimed, quite controversially, that because plants can learn and exhibit other forms of intelligence, they must be conscious.
Nonsense, argues a new paper published today in Trends in Plant Science. The lead author of the new paper, biologist Lincoln Taiz from the University of California at Santa Cruz, isn’t denying plant intelligence, but makes a strong case against their being conscious.
Ultrasonic acoustic emission (UAE) in trees is often related to collapsing water columns in the flow path as a result of tensions that are too strong (cavitation). However, in a decibel (dB) range below that associated with cavitation, a close relationship was found between UAE intensities and stem radius changes.
UAE was continuously recorded on the stems of mature field-grown trees of Scots pine (Pinus sylvestris) and pubescent oak (Quercus pubescens) at a dry inner-Alpine site in Switzerland over two seasons. The averaged 20-Hz records were related to microclimatic conditions in air and soil, sap-flow rates and stem-radius fluctuations de-trended for growth (ΔW).
Within a low-dB range (27 ± 1 dB), UAE regularly increased and decreased in a diurnal rhythm in parallel with ΔW on cloudy days and at night. These low-dB emissions were interrupted by UAE abruptly switching between the low-dB range and a high-dB range (36 ± 1 dB) on clear, sunny days, corresponding to the widely supported interpretation of UAE as sound from cavitation.
It is hypothesized that the low-dB signals in drought-stressed trees are caused by respiration and/or cambial growth as these physiological activities are tissue water-content dependent and have been shown to produce courses of CO2 efflux similar to our courses of ΔW and low-dB UAE.
A mammoth meatball has been created by a cultivated meat company, resurrecting the flesh of the long-extinct animals.
The project aims to demonstrate the potential of meat grown from cells, without the slaughter of animals, and to highlight the link between large-scale livestock production and the destruction of wildlife and the climate crisis.
The mammoth meatball was produced by Vow Food, an Australian company, which is taking a different approach to cultured meat.
There are scores of companies working on replacements for conventional meat, such as chicken, pork and beef. But Vow Food is aiming to mix and match cells from unconventional species to create new kinds of meat.
The company has already investigated the potential of more than 50 species, including alpaca, buffalo, crocodile, kangaroo, peacocks and different types of fish.
The first cultivated meat to be sold to diners will be Japanese quail, which the company expects will be in restaurants in Singapore this year.
“We have a behaviour change problem when it comes to meat consumption,” said George Peppou, CEO of Vow Food.
“The goal is to transition a few billion meat eaters away from eating [conventional] animal protein to eating things that can be produced in electrified systems.
“And we believe the best way to do that is to invent meat. We look for cells that are easy to grow, really tasty and nutritious, and then mix and match those cells to create really tasty meat.”
Tim Noakesmith, who cofounded Vow Food with Peppou, said: “We chose the woolly mammoth because it’s a symbol of diversity loss and a symbol of climate change.” The creature is thought to have been driven to extinction by hunting by humans and the warming of the world after the last ice age.
[…]
Cultivated meat – chicken from Good Meat – is currently only sold to consumers in Singapore, but two companies have now passed an approval process in the US.
Vow Food worked with Prof Ernst Wolvetang, at the Australian Institute for Bioengineering at the University of Queensland, to create the mammoth muscle protein. His team took the DNA sequence for mammoth myoglobin, a key muscle protein that gives meat its flavour, and filled in the few gaps using elephant DNA.
This sequence was placed in myoblast stem cells from a sheep, which replicated to grow to the 20bn cells subsequently used by the company to grow the mammoth meat.
“It was ridiculously easy and fast,” said Wolvetang. “We did this in a couple of weeks.” Initially, the idea was to produce dodo meat, he said, but the DNA sequences needed do not exist.
[…]
Seren Kell, at the Good Food Institute Europe, said: “I hope this fascinating project will open up new conversations about cultivated meat’s extraordinary potential to produce more sustainable food.
“However, as the most common sources of meat are farm animals such as cattle, pigs, and poultry, most of the sustainable protein sector is focused on realistically replicating meat from these species.
“By cultivating beef, pork, chicken and seafood we can have the most impact in terms of reducing emissions from conventional animal agriculture.”
Computer scientists found the holy grail of tiles. They call it the “einstein,” one shape that alone can cover a plane without ever repeating a pattern.
And all it takes for this special shape is 13 sides.
In the world of mathematics, an “aperiodic monotile”—also known as an einstein, from the German for “one stone”—is a single shape that can tile a plane without the pattern ever repeating.
“In this paper we present the first true aperiodic monotile, a shape that forces aperiodicity through geometry alone, with no additional constraints applied via matching conditions,” writes Craig Kaplan, a computer science professor from the University of Waterloo and one of the four authors of the paper. “We prove that this shape, a polykite that we call ‘the hat,’ must assemble into tilings based on a substitution system.”
[…]
The history of aperiodic tiling has never seen a breakthrough like this one. The first aperiodic sets had over 20,000 tiles, Kaplan tweeted. “Subsequent research lowered that number, to sets of size 92, then six, and then two in the form of the famous Penrose tiles.” And those Penrose tiles date from 1974.
[…]
The team proved the nature of the shape with computer-assisted methods, and in a fascinating aside, the shape doesn’t lose its aperiodic nature even when the lengths of its sides change.
The number of exploding stars (supernovae) has significantly influenced marine life’s biodiversity during the last 500 million years. This is the essence of a new study published in Ecology and Evolution by Henrik Svensmark of DTU Space.
Extensive studies of the fossil record have shown that the diversity of life forms has varied significantly over geological time, and a fundamental question of evolutionary biology is which processes are responsible for these variations.
The new study reveals a major surprise: The varying number of nearby exploding stars (supernovae) closely follows changes in the biodiversity of marine genera (the taxonomic rank above species) during the last 500 million years. The agreement appears after normalizing the marine diversity curve by the changes in shallow marine areas along the continental coasts.
Shallow marine shelves are relevant since most marine life lives in these areas, and changes in shelf areas open new regions where species can evolve. Therefore, changes in available shallow areas influence biodiversity.
“A possible explanation for the supernova-diversity link is that supernovae influence Earth’s climate,” says Henrik Svensmark, author of the paper and senior researcher at DTU Space.
“A high number of supernovae leads to a cold climate with a large temperature difference between the equator and polar regions. This results in stronger winds, ocean mixing, and transportation of life-essential nutrients to the surface waters along the continental shelves.”
Variations in relative supernova history (black curve) compared with genera-level diversity curves normalized with the area of shallow marine margins (shallow areas along the coasts). The brown and light green curves are major marine animals’ genera-level diversity. The orange is marine invertebrate genera-level diversity. Finally, the dark green curve is all marine animals’ genera-level diversity. Abbreviations for geological periods are Cm Cambrian, O Ordovician, S Silurian, D Devonian, C Carboniferous, P Permian, Tr Triassic, J Jurassic, K Cretaceous, Pg Palaeogene, Ng Neogene. Credit: Henrik Svensmark, DTU Space
The paper concludes that supernovae are vital for primary bioproductivity by influencing the transport of nutrients. Gross primary bioproductivity provides energy to the ecological systems, and speculations have suggested that changes in bioproductivity may influence biodiversity. The present results are in agreement with this hypothesis.
“The new evidence points to a connection between life on Earth and supernovae, mediated by the effect of cosmic rays on clouds and climate,” says Henrik Svensmark.
When heavy stars explode, they produce cosmic rays, which are elementary particles with enormous energies. Cosmic rays travel to our solar system, where some end their journey by colliding with Earth’s atmosphere. Previous studies by Henrik Svensmark and colleagues, referenced below, show that they become a primary source of ions that help form and grow the aerosols required for cloud formation.
Since clouds can regulate the solar energy reaching Earth’s surface, this cosmic-ray-aerosol-cloud chain influences the climate. Evidence shows substantial climate shifts when the intensity of cosmic rays changes by several hundred percent over millions of years.
More information: Henrik Svensmark, A persistent influence of supernovae on biodiversity over the Phanerozoic, Ecology and Evolution (2023). DOI: 10.1002/ece3.9898
Henrik Svensmark, Supernova Rates and Burial of Organic Matter, Geophysical Research Letters (2022). DOI: 10.1029/2021GL096376
Henrik Svensmark and Eigil Friis-Christensen, Variation of Cosmic Ray Flux and Global Cloud Coverage - a Missing Link in Solar-Climate Relationships, Journal of Atmospheric and Terrestrial Physics, 59, 1225 (1997)
Nir J. Shaviv et al, The Phanerozoic climate, Annals of the New York Academy of Sciences (2022). DOI: 10.1111/nyas.14920
Henrik Svensmark, Evidence of nearby supernovae affecting life on Earth, Monthly Notices of the Royal Astronomical Society (2012). DOI: 10.1111/j.1365-2966.2012.20953.x
[…] The invention, by a University of Bristol physicist who gave it the name “counterportation,” provides the first-ever practical blueprint for creating in the lab a wormhole that verifiably bridges space, as a probe into the inner workings of the universe.
By deploying a novel computing scheme, revealed in the journal Quantum Science and Technology, which harnesses the basic laws of physics, a small object can be reconstituted across space without any particles crossing. Among other things, it provides a “smoking gun” for the existence of a physical reality underpinning our most accurate description of the world.
[…]
Hatim said, “Here’s the sharp distinction. While counterportation achieves the end goal of teleportation, namely disembodied transport, it remarkably does so without any detectable information carriers traveling across.”
[…]
“If counterportation is to be realized, an entirely new type of quantum computer has to be built: an exchange-free one, where communicating parties exchange no particles,” Hatim said.
[…]
“The goal in the near future is to physically build such a wormhole in the lab, which can then be used as a testbed for rival physical theories, even ones of quantum gravity,” Hatim added.
[…]
John Rarity, professor of optical communication systems at the University of Bristol, said, “We experience a classical world which is actually built from quantum objects. The proposed experiment can reveal this underlying quantum nature showing that entirely separate quantum particles can be correlated without ever interacting. This correlation at a distance can then be used to transport quantum information (qubits) from one location to another without a particle having to traverse the space, creating what could be called a traversable wormhole.”
More information: Hatim Salih, From counterportation to local wormholes, Quantum Science and Technology (2022). DOI: 10.1088/2058-9565/ac8ecd
[…] “The image was captured on a summer evening in São José dos Campos [in São Paulo state] while a negatively charged lightning bolt was nearing the ground at 370 km per second. When it was a few dozen meters from ground level, lightning rods and tall objects on the tops of nearby buildings produced positive upward discharges, competing to connect to the downward strike. The final image prior to the connection was obtained 25 thousandths of a second before the lightning hit one of the buildings,” Saba said.
He used a camera that takes 40,000 frames per second. When the video is played back in slow motion, it shows how lightning discharges behave and also how dangerous they can be if the protection system is not properly installed: Although there are more than 30 lightning rods in the vicinity, the strike connected not to them but to a smokestack on top of one of the buildings. “A flaw in the installation left the area unprotected. The impact of a 30,000-amp discharge did enormous damage,” he said.
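A back-of-the-envelope sketch of what that capture rate means (the 40,000 fps and 25 ms figures are from the article; the 30 fps playback rate is an assumption for illustration):

```python
# Rough numbers for the high-speed lightning footage described above.
FPS_CAMERA = 40_000    # capture rate quoted in the article, frames per second
PLAYBACK_FPS = 30      # a typical playback rate (assumption)
window_s = 0.025       # the 25-thousandths-of-a-second pre-connection window

frames_captured = int(FPS_CAMERA * window_s)   # frames recorded in that window
slowdown = FPS_CAMERA / PLAYBACK_FPS           # apparent slow-motion factor
playback_s = frames_captured / PLAYBACK_FPS    # on-screen duration of that window

print(frames_captured)        # 1000 frames
print(round(slowdown))        # ~1333x slower than real time
print(round(playback_s, 1))   # ~33.3 seconds of video from 25 ms of reality
```

That final 25 ms before the strike, invisible to the naked eye, stretches into roughly half a minute of footage.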
[…]
Lightning strikes branch out as the electrical charges seek the path of least resistance, rather than the shortest path, which would be a straight line. The path of least resistance, usually a zigzag, is determined by different electrical characteristics of the atmosphere, which is not homogeneous. “A lightning strike made up of several discharges can last up to 2 seconds. However, each discharge lasts only fractions of milliseconds,” Saba said.
Lightning rods neither attract nor repel strikes, he added. Nor do they “discharge” clouds, as used to be believed. They simply offer lightning an easy and safe route to the ground.
Because it is not always possible to rely on the protection of a lightning rod, and most atmospheric discharges occur in summer in the tropics, it is worth considering Saba’s advice. “Storms are more frequent in the afternoon than in the morning, so be careful about outdoor activities on summer afternoons. Find shelter if you hear thunder, but never under a tree or pole, and never under a rickety roof,” he said.
“If you can’t find a safe place to shelter, stay in the car and wait for the storm to blow over. If no car or other shelter is available, squat down with your feet together. Don’t stand upright or lie flat. Indoors, avoid contact with appliances and fixed-line telephones.”
It is possible to survive being struck by lightning, and there are many examples. The odds increase if the person receives care quickly. “Cardiac arrest is the only cause of death. In this case, cardiopulmonary resuscitation is the recommended treatment,” Saba said.
Saba began systematically studying lightning with high-speed cameras in 2003 and has since built what is now the world’s largest collection of high-speed lightning videos.
More information: Marcelo M. F. Saba et al, Close View of the Lightning Attachment Process Unveils the Streamer Zone Fine Structure, Geophysical Research Letters (2022). DOI: 10.1029/2022GL101482
[…] An interdisciplinary team of scientists has released a complete reconstruction and analysis of a larval fruit fly’s brain, published Thursday in the journal Science. The resulting map, or connectome, as it’s called in neuroscience, includes all 3,016 neurons and the 548,000 synapses running between them that make up the baby fly’s entire central nervous system. The connectome includes both of the larva’s brain lobes, as well as the nerve cord.
[…]
It is the most complete insect brain map ever constructed and the most intricate entire connectome of any animal ever published. In short: it’s a game changer.
For some, it represents a paradigm shift in the field of neuroscience. Fruit flies are model organisms, and many neural structures and pathways are thought to be conserved across evolution. What’s true for maggots might well be true for other insects, mice, or even humans.
[…]
This new connectome is like going from a blurry satellite view to a crisp city street map. On the block-by-block array of an insect’s cortex, “now we know where every 7-11 and every, you know, Target [store] is,” Mosca said.
To complete the connectome, a group of Cambridge University scientists spent 12 years focusing on the brain of a single, 6-hour-old female fruit fly larva. The organ, approximately 170 by 160 by 70 micrometers in size, is truly teeny—within an order of magnitude of things too small to see with the naked eye. Yet the researchers were able to use electron microscopy to image it in thousands of slices, each only nanometers thick. Imaging just one of the neurons took a day, on average. From there, once the physical map of the neurons, or “brain volume,” was complete, the analysis began.
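Some rough arithmetic shows why the project took so long (figures from the article; the 365-day working year, with no downtime, is an assumption):

```python
# Imaging effort implied by the article's figures:
# 3,016 neurons, one day of imaging per neuron on average.
neurons = 3_016
days_per_neuron = 1
imaging_years = neurons * days_per_neuron / 365

print(round(imaging_years, 1))  # ~8.3 years of imaging alone,
                                # most of the 12-year project
```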
Along with computer scientists at Johns Hopkins University, the Cambridge neuroscientists assessed and categorized the neurons and synapses they’d found. The JHU researchers fine-tuned a computer program for this exact application in order to determine cell and synapse types, patterns within the brain connections, and to chart some function onto the larva connectome—based on previous neuroscience studies of behavior and sensory systems.
They found many surprises. For one, the larval fly connectome showed numerous neural pathways that zigzagged between hemispheres, demonstrating just how integrated both sides of the brain are and how nuanced signal processing can be, said Michael Winding, one of the study’s lead researchers and a Cambridge University neuroscientist, in a video call. “I never thought anything would look like that,” Winding said.
[…]
Fascinatingly, these recurrent structures mapped from an actual brain appear to closely match the architecture of some artificial intelligence models (called residual neural networks), with nested pathways allowing for different levels of complexity, Zlatic noted.
[…]
Not only was the revealed neural structure layered, but the neurons themselves appear to be multi-faceted. Sensory cells connected across modalities—visual, smell, and other inputs crossed and interacted en route to output cells, explained Zlatic. “This brain does a huge amount of multi-sensory integration…which is computationally a very powerful thing,” she added.
Then there were the types and relative quantities of cell-to-cell connections. In neuroscience, the classic “canonical” type of synapse runs from an axon to a dendrite. Yet, within the mapped larval fly brain, that only accounted for about two-thirds of the connections, Winding and Zlatic said. Axons linked to axons, dendrites to dendrites, and dendrites to axons. Scientists have known these sorts of connections exist in animal nervous systems, but the scope went far beyond what they expected. “Given the breadth of these connections, they must be important for brain computation,” Winding noted. We just don’t know exactly how.
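Splitting the synapse count by the reported proportions gives a sense of scale (the exact two-thirds fraction is the article's rough figure, used here for illustration):

```python
# Canonical vs. non-canonical synapses in the larval fly connectome,
# using the article's figures: ~548,000 synapses total, roughly
# two-thirds of them the classic axon -> dendrite type.
total_synapses = 548_000
canonical = round(total_synapses * 2 / 3)    # axon -> dendrite
non_canonical = total_synapses - canonical   # axon-axon, dendrite-dendrite,
                                             # and dendrite-axon connections

print(canonical)      # ~365,333 canonical synapses
print(non_canonical)  # ~182,667 non-canonical ones -- far from negligible
```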
Drop a flat piece of paper and it will flutter and tumble through the air as it falls, but a well-fashioned paper airplane will glide smoothly. Although these structures look simple, their aerodynamics are surprisingly complex. Researchers at New York University’s Courant Institute of Mathematical Sciences conducted a series of experiments involving paper airplanes to explore this transition and develop a mathematical model to predict flight stability, according to a March paper published in the Journal of Fluid Mechanics.
“The study started with simple curiosity about what makes a good paper airplane and specifically what is needed for smooth gliding,” said co-author Leif Ristroph. “Answering such basic questions ended up being far from child’s play. We discovered that the aerodynamics of how paper airplanes keep level flight is really very different from the stability of conventional airplanes.”
Nobody knows who invented the first paper airplane, but paper-making was well established in China by the second century CE, and paper-folding later emerged there as a popular art form. Paper airplanes have long been studied as a means of learning more about the aerodynamics of flight. For instance, Leonardo da Vinci famously built a model plane out of parchment while dreaming up flying machines and used paper models to test his design for an ornithopter. In the 19th century, British engineer and inventor Sir George Cayley—sometimes called the “father of aviation”—studied the gliding performance of paper airplanes to design a glider capable of carrying a human.
An amusing “scientist playing with paper planes” anecdote comes from physicist Theodore von Kármán. In his 1967 memoir The Wind and Beyond, he recalled a formal 1924 banquet in Delft, The Netherlands, where fellow physicist Ludwig Prandtl constructed a paper airplane out of a menu to demonstrate the mechanics of flight to von Kármán’s sister, who was seated next to him. When he threw the paper plane, “It landed on the shirtfront of the French minister of education, much to the embarrassment of my sister and others at the banquet,” von Kármán wrote.
Enlarge/ Flight motions of paper airplanes with different center of mass locations.
NYU Applied Mathematics Laboratory
While scientists have clearly made great strides in aerodynamics—particularly concerning aircraft—Ristroph et al. noted that there was no good mathematical model for predicting the simpler, subtler gliding flight of paper airplanes. It was already well-known that displacing the center of mass results in various flight trajectories, some more stable than others. “The key criterion of a successful glider is that the center of mass must be in the ‘just right’ place,” said Ristroph. “Good paper airplanes achieve this with the front edge folded over several times or by an added paper clip, which requires a little trial and error.”
He and his team verified this by test-flying various rectangular sheets of paper, changing the front weight by adding thin metallic tape to one edge. They found that an unweighted sheet tumbled end over end while descending left to right under the force of gravity. Adding a small weight to shift the center of mass slightly forward also produced a tumbling trajectory. Overall, they found that flyers with greater front-loading produced erratic trajectories full of swoops, climbs, flips, and dives.
The next step was to conduct more controlled and systematic experiments. Ristroph et al. decided to work with thin plastic plates “flying” through a large glass tank of water. The plates were laser-cut from an acrylic plastic sheet, along with two smaller “fins” embedded with lead weights to displace the center of mass; the fins also served as aerodynamic stabilizers. There were 17 plastic plates, each with a different center of mass. Each was released into the tank by sliding it down a short ramp, and the team recorded its free-flight motion through the water.
Enlarge/ Trajectories of plates falling through water, where the different colors represent different degrees of front weighting. Only the “just right” weight distribution leads to the smooth gliding shown in blue.
NYU Applied Mathematics Laboratory
They found the same dynamics played out. If the weight was centered, or nearly so, at the center of the wing, the plate would flutter and tumble erratically. Displace the center of mass too far toward one edge, and the plate would rapidly nosedive and crash. The proverbial “sweet spot” was placing the weight between those extremes. In that case, the aerodynamic force on the plane’s wing will push the wing back down if it moves upward, and push the wing back up if it moves downward. In other words, the center of pressure will vary with the angle of flight, thereby ensuring stability.
This differs substantially from conventional aircraft, which rely on airfoils—structures designed to generate lift. “The effect we found in paper airplanes does not happen for the traditional airfoils used as aircraft wings, whose center of pressure stays fixed in place across the angles that occur in flight,” said Ristroph. “The shifting of the center of pressure thus seems to be a unique property of thin, flat wings, and this ends up being the secret to the stable flight of paper airplanes. This is why airplanes need a separate tail wing as a stabilizer while a paper plane can get away with just a main wing that gives both lift and stability.”
The team also developed a mathematical model as a “flight simulator” to reproduce those motions. Ristroph et al. think their findings will prove useful in small-scale flight applications like drones or flying robots, which often require a more minimal design with no need for many extra flight surfaces, sensors, and controllers. The authors also note that the same strategy might be at work in winged plant seeds, some of which also exhibit stable gliding, with the seed serving as the payload to displace the center of mass. In fact, a 1987 study of the flying seeds of the gourd Alsomitra macrocarpa showed a center of mass and glide ratios consistent with the Ristroph group’s optimal gliding requirements.
Geology textbooks almost inevitably include a cutaway diagram of the Earth showing four neatly delineated layers: a thin outer shell of rock that we live on known as the crust; the mantle, where rocks flow like an extremely viscous liquid, driving the movement of continents and the lifting of mountains; a liquid outer core of iron and nickel that generates the planet’s magnetic field; and a solid inner core. Analyzing the crisscrossing of seismic waves from large earthquakes, two Australian scientists say there is a distinctly different layer at the very center of the Earth. “We have now confirmed the existence of the innermost inner core,” said one of the scientists, Hrvoje Tkalcic, a professor of geophysics at the Australian National University in Canberra.
Dr. Tkalcic and Thanh-Son Pham, a postdoctoral researcher, estimate that the innermost inner core is about 800 miles wide; the entire inner core is about 1,500 miles wide. Their findings were published on Tuesday in the journal Nature Communications. While the cutaway diagram appears to depict clear-cut divisions, knowledge about the deep interior of Earth is unavoidably fuzzy. It is nearly 4,000 miles to the center of Earth, and it is impossible to drill more than a few miles into the crust. Most of what is known about what lies beneath comes from seismic waves — the vibrations of earthquakes traveling through and around the planet. Think of them as a giant sonogram of Earth.
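Those widths imply the innermost inner core is a modest fraction of the inner core's volume, a quick sketch using the article's figures (treating both regions as spheres, which is a simplifying assumption):

```python
# Volume share of the innermost inner core, from the reported diameters.
innermost_diameter_mi = 800     # innermost inner core, per Tkalcic and Pham
inner_core_diameter_mi = 1_500  # entire inner core

# For concentric spheres, the volume ratio is the cube of the diameter ratio.
volume_fraction = (innermost_diameter_mi / inner_core_diameter_mi) ** 3
print(round(volume_fraction, 2))  # ~0.15: about 15% of the inner core's volume
```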
Two Harvard seismologists, Miaki Ishii and Adam Dziewonski, first proposed the idea of the innermost inner core in 2002 based on peculiarities in the speed of seismic waves passing through the inner core. Scientists already knew that the speed of seismic waves traveling through this part of the Earth varied depending on the direction. The waves traveled fastest when going from pole to pole along the Earth’s axis and slowest when traveling perpendicular to the axis. The difference in speeds — a few percent faster along polar paths — arises from the alignment of iron crystals in the inner core, geophysicists believe. But in a small region at the center, the slowest waves were those traveling at a 45-degree angle to the axis instead of 90 degrees, the Harvard seismologists said. The data available then were too sparse to convince everyone.
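A toy model helps show why a minimum at 45 degrees is such a distinctive signature. Seismologists often parameterize inner-core anisotropy as a fractional speed change of the form b·cos²ζ + c·cos⁴ζ, where ζ is the angle between the ray and Earth's rotation axis; the coefficient values below are illustrative assumptions, not fitted numbers:

```python
import math

# Toy illustration of the two directional speed patterns described above.
def dv_over_v(z_deg, b, c):
    """Fractional speed perturbation for a ray at angle z_deg to Earth's axis."""
    x = math.cos(math.radians(z_deg)) ** 2
    return b * x + c * x * x

# Bulk inner core: fastest pole-to-pole (0 deg), slowest perpendicular (90 deg).
bulk = {z: dv_over_v(z, b=0.03, c=0.0) for z in (0, 45, 90)}

# Innermost inner core: choosing b = -c puts the minimum at 45 degrees,
# the pattern the Harvard seismologists reported.
innermost = {z: dv_over_v(z, b=-0.03, c=0.03) for z in (0, 45, 90)}

print(bulk)       # monotonic: fastest at 0 deg, slowest at 90 deg
print(innermost)  # slowest at 45 deg, with 0 and 90 deg equal
```

Only the quartic (cos⁴) term can produce a speed minimum at an oblique angle, which is why the 45-degree observation pointed to a structurally distinct innermost region.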
Think of bringing a pot of water to the boil: As the temperature reaches the boiling point, bubbles form in the water, burst and evaporate as the water boils. This continues until there is no more water changing phase from liquid to steam.
This is roughly the idea of what happened in the very early universe, right after the Big Bang, 13.7 billion years ago.
The idea comes from particle physicists Martin S. Sloth from the Center for Cosmology and Particle Physics Phenomenology at University of Southern Denmark and Florian Niedermann from the Nordic Institute for Theoretical Physics (NORDITA) in Stockholm. Niedermann is a former postdoc in Sloth’s research group. In this new scientific article, they present an even stronger basis for their idea.
Many bubbles crashing into each other
“One must imagine that bubbles arose in various places in the early universe. They got bigger and they started crashing into each other. In the end, there was a complicated state of colliding bubbles, which released energy and eventually evaporated,” said Martin S. Sloth.
The background for their theory of phase changes in a bubbling universe is a highly interesting problem with calculating the so-called Hubble constant, a value for how fast the universe is expanding. Sloth and Niedermann believe that the bubbling universe plays a role here.
The Hubble constant can be calculated very reliably by, for example, analyzing cosmic background radiation or by measuring how fast a galaxy or an exploding star is moving away from us. According to Sloth and Niedermann, both methods are not only reliable, but also scientifically recognized. The problem is that the two methods do not lead to the same Hubble constant. Physicists call this problem “the Hubble tension.”
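The size of the disagreement can be made concrete. The numbers below are the commonly cited values from the two approaches, not figures from this article:

```python
# The "Hubble tension" in numbers (commonly cited values, used here
# for illustration; both are in km/s per megaparsec).
H0_early = 67.4   # from cosmic background radiation (early-universe method)
H0_late = 73.0    # from receding galaxies and supernovae (late-universe method)

gap = H0_late - H0_early
tension_pct = 100 * gap / H0_early
print(round(tension_pct, 1))  # ~8.3% -- much larger than either
                              # method's stated uncertainties
```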
Is there something wrong with our picture of the early universe?
“In science, you have to be able to reach the same result by using different methods, so here we have a problem. Why don’t we get the same result when we are so confident about both methods?” said Florian Niedermann.
Sloth and Niedermann believe they have found a way to get the same Hubble constant, regardless of which method is used. The path starts with a phase transition and a bubbling universe—and thus an early, bubbling universe is connected to “the Hubble tension.” “If we assume that these methods are reliable—and we think they are—then maybe the methods are not the problem. Maybe we need to look at the starting point, the basis, that we apply the methods to. Maybe this basis is wrong.”
AI generated illustration of colliding bubbles in the universe. Credit: Birgitte Svennevig, University of Southern Denmark
An unknown dark energy
The basis for the methods is the so-called Standard Model, which assumes that there was a lot of radiation and matter, both normal and dark, in the early universe, and that these were the dominant forms of energy. The radiation and the normal matter were compressed in a dark, hot and dense plasma: the state of the universe in the first 380,000 years after the Big Bang.
When you base your calculations on the Standard Model, you arrive at different results for how fast the universe is expanding—and thus different Hubble constants.
But maybe a new form of dark energy was at play in the early universe? Sloth and Niedermann think so.
If you introduce the idea that a new form of dark energy in the early universe suddenly began to bubble and undergo a phase transition, the calculations agree. In their model, Sloth and Niedermann arrive at the same Hubble constant when using both measurement methods. They call this idea New Early Dark Energy—NEDE.
Change from one phase to another—like water to steam
Sloth and Niedermann believe that this new, dark energy underwent a phase transition when the universe expanded, shortly before it changed from the dense and hot plasma state to the universe we know today.
“This means that the dark energy in the early universe underwent a phase transition, just as water can change phase between frozen, liquid and steam. In the process, the energy bubbles eventually collided with other bubbles and along the way released energy,” said Niedermann.
“It could have lasted anything from an insanely short time—perhaps just the time it takes two particles to collide—to 300,000 years. We don’t know, but that is something we are working to find out,” added Sloth.
Do we need new physics?
So, the phase transition model is based on the premise that the universe does not behave as the Standard Model tells us. It may sound a little scientifically crazy to suggest that something is wrong with our fundamental understanding of the universe; that you can just propose the existence of hitherto unknown forces or particles to solve the Hubble tension.
“But if we trust the observations and calculations, we must accept that our current model of the universe cannot explain the data, and then we must improve the model. Not by discarding it and its success so far, but by elaborating on it and making it more detailed so that it can explain the new and better data,” said Martin S. Sloth, adding, “It appears that a phase transition in the dark energy is the missing element in the current Standard Model to explain the differing measurements of the universe’s expansion rate.”
The findings are published in the journal Physics Letters B.
More information: Florian Niedermann et al, Hot new early dark energy: Towards a unified dark sector of neutrinos, dark energy and dark matter, Physics Letters B (2022). DOI: 10.1016/j.physletb.2022.137555
Eight years ago, a patient lost her power of speech because of ALS, or Lou Gehrig’s disease, which causes progressive paralysis. She can still make sounds, but her words have become unintelligible, leaving her reliant on a writing board or iPad to communicate.
Now, after volunteering to receive a brain implant, the woman has been able to rapidly communicate phrases like “I don’t own my home” and “It’s just tough” at a rate approaching normal speech.
That is the claim in a paper published over the weekend on the website bioRxiv by a team at Stanford University. The study has not been formally reviewed by other researchers. The scientists say their volunteer, identified only as “subject T12,” smashed previous records by using the brain-reading implant to communicate at a rate of 62 words a minute, three times the previous best.
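Putting the reported rate in context (the 62 wpm figure and the "three times the previous best" claim are from the paper; the ~160 wpm conversational-speech rate is a commonly cited benchmark, not a number from this article):

```python
# Speech-rate comparison for the brain-reading implant result.
wpm_implant = 62                      # rate reported for "subject T12"
wpm_previous_best = wpm_implant / 3   # "three times the previous best"
wpm_conversation = 160                # typical conversational English (assumption)

print(round(wpm_previous_best, 1))    # ~20.7 wpm before this study
print(round(100 * wpm_implant / wpm_conversation))  # ~39% of conversational speed
```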
[…]
The brain-computer interfaces that Shenoy’s team works with involve a small pad of sharp electrodes embedded in a person’s motor cortex, the brain region most involved in movement. This allows researchers to record activity from a few dozen neurons at once and find patterns that reflect what motions someone is thinking of, even if the person is paralyzed.
In previous work, paralyzed volunteers have been asked to imagine making hand movements. By “decoding” their neural signals in real time, implants have let them steer a cursor around a screen, pick out letters on a virtual keyboard, play video games, or even control a robotic arm.
In the new research, the Stanford team wanted to know if neurons in the motor cortex contained useful information about speech movements, too. That is, could they detect how “subject T12” was trying to move her mouth, tongue, and vocal cords as she attempted to talk?
These are small, subtle movements, and according to Sabes, one big discovery is that just a few neurons contained enough information to let a computer program predict, with good accuracy, what words the patient was trying to say. That information was conveyed by Shenoy’s team to a computer screen, where the patient’s words appeared as they were spoken by the computer.
[…]
Shenoy’s group is part of a consortium called BrainGate that has placed electrodes into the brains of more than a dozen volunteers. They use an implant called the Utah Array, a rigid metal square with about 100 needle-like electrodes.
Some companies, including Elon Musk’s brain interface company, Neuralink, and a startup called Paradromics, say they have developed more modern interfaces that can record from thousands—even tens of thousands—of neurons at once.
While some skeptics have asked whether measuring from more neurons at one time will make any difference, the new report suggests it will, especially if the job is to brain-read complex movements such as speech.
The Stanford scientists found that the more neurons they read from at once, the fewer errors they made in understanding what “T12” was trying to say.
“This is a big deal, because it suggests efforts by companies like Neuralink to put 1,000 electrodes into the brain will make a difference, if the task is sufficiently rich,” says Sabes, who previously worked as a senior scientist at Neuralink.
[…]In a study published Monday in the journal Biosensors and Bioelectronics, a group of researchers from Tel Aviv University (via Neuroscience News) said they recently created a robot that can identify a handful of smells with 10,000 times more sensitivity than some specialized electronics. They describe their robot as a bio-hybrid platform (read: cyborg). It features a set of antennae taken from a desert locust that is connected to an electronic system that measures the amount of electrical signal produced by the antennae when they detect a smell. They paired the robot with an algorithm that learned to characterize the smells by their signal output. In this way, the team created a system that could reliably differentiate between eight “pure” odors, including geranium, lemon and marzipan, and two mixtures of different smells. The scientists say their robot could one day be used to detect drugs and explosives.
A YouTube video from Tel Aviv University claims the robot is a “scientific first,” but last June researchers from Michigan State University published research detailing a system that used surgically-altered locusts to detect cancer cells. Back in 2016, scientists also tried turning locusts into bomb-sniffing cyborgs. What can I say, after millennia of causing crop failures, the pests could finally be useful for something.
The forest carbon offsets approved by the world’s leading provider and used by Disney, Shell, Gucci and other big corporations are largely worthless and could make global heating worse, according to a new investigation.
The research into Verra, the world’s leading carbon standard for the rapidly growing $2bn (£1.6bn) voluntary offsets market, has found that, based on analysis of a significant percentage of the projects, more than 90% of their rainforest offset credits – among the most commonly used by companies – are likely to be “phantom credits” and do not represent genuine carbon reductions.
The analysis raises questions over the credits bought by a number of internationally renowned companies – some of them have labelled their products “carbon neutral”, or have told their consumers they can fly, buy new clothes or eat certain foods without making the climate crisis worse.
But doubts have been raised repeatedly over whether they are really effective.
The nine-month investigation has been undertaken by the Guardian, the German weekly Die Zeit and SourceMaterial, a non-profit investigative journalism organisation. It is based on new analysis of scientific studies of Verra’s rainforest schemes.
[…]
Verra argues that the conclusions reached by the studies are incorrect, and questions their methodology. And they point out that their work since 2009 has allowed billions of dollars to be channelled to the vital work of preserving forests.
The investigation found that:
Only a handful of Verra’s rainforest projects showed evidence of deforestation reductions, according to two studies, with further analysis indicating that 94% of the credits had no benefit to the climate.
The threat to forests had been overstated by about 400% on average for Verra projects, according to analysis of a 2022 University of Cambridge study.
Gucci, Salesforce, BHP, Shell, easyJet, Leon and the band Pearl Jam were among dozens of companies and organisations that have bought rainforest offsets approved by Verra for environmental claims.
Human rights issues are a serious concern in at least one of the offsetting projects. The Guardian visited a flagship project in Peru, and was shown videos that residents said showed their homes being cut down with chainsaws and ropes by park guards and police. They spoke of forced evictions and tensions with park authorities.
[…]
Two different groups of scientists – one internationally based, the other from Cambridge in the UK – looked at a total of about two-thirds of 87 Verra-approved active projects. A number were left out by the researchers when they felt there was not enough information available to fairly assess them.
The two studies from the international group of researchers found just eight out of 29 Verra-approved projects where further analysis was possible showed evidence of meaningful deforestation reductions.
The journalists were able to do further analysis on those projects, comparing the estimates made by the offsetting projects with the results obtained by the scientists. The analysis indicated about 94% of the credits the projects produced should not have been approved.
Credits from 21 projects had no climate benefit, seven had between 52% and 98% fewer than claimed using Verra’s system, and one had 80% more impact, the investigation found.
Separately, the study by the University of Cambridge team of 40 Verra projects found that while a number had stopped some deforestation, the areas were extremely small. Just four projects were responsible for three-quarters of the total forest that was protected.
The journalists again analysed these results more closely and found that, in 32 projects where it was possible to compare Verra’s claims with the study finding, baseline scenarios of forest loss appeared to be overstated by about 400%. Three projects in Madagascar have achieved excellent results and have a significant impact on the figures. If those projects are not included, the average inflation is about 950%.
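A simplified sketch shows how an overstated baseline turns into phantom credits (this is an illustration of the arithmetic only; the investigation's actual accounting is more involved):

```python
# How a "~400% overstated" deforestation baseline inflates credits.
overstatement = 4.0                 # baseline threat inflated by ~400%
actual_loss = 1.0                   # forest genuinely at risk (normalized)
claimed_loss = actual_loss * (1 + overstatement)  # loss claimed without the project

# Credits are issued against the claimed loss, so the excess is "phantom".
phantom_fraction = 1 - actual_loss / claimed_loss
print(round(100 * phantom_fraction))  # ~80% of such credits would be phantom
```

At the ~950% inflation found when the Madagascar projects are excluded, the phantom share rises above 90%, in line with the investigation's headline figures.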
[…]
Barbara Haya, the director of the Berkeley Carbon Trading Project, has been researching carbon credits for 20 years, hoping to find a way to make the system function. She said: “The implications of this analysis are huge. Companies are using credits to make claims of reducing emissions when most of these credits don’t represent emissions reductions at all.
“Rainforest protection credits are the most common type on the market at the moment. And it’s exploding, so these findings really matter. But these problems are not just limited to this credit type. These problems exist with nearly every kind of credit.
“One strategy to improve the market is to show what the problems are and really force the registries to tighten up their rules so that the market could be trusted. But I’m starting to give up on that. I started studying carbon offsets 20 years ago studying problems with protocols and programs. Here I am, 20 years later having the same conversation. We need an alternative process. The offset market is broken.”
Contrails — the wispy ice clouds trailing behind flying jets — “are surprisingly bad for the environment,” reports CNN: A study that looked at aviation’s contribution to climate change between 2000 and 2018 concluded that contrails create 57% of the sector’s warming impact, significantly more than the CO2 emissions from burning fuel. They do so by trapping heat that would otherwise be released into space.
And yet, the problem may have an apparently straightforward solution. Contrails — short for condensation trails, which form when water vapor condenses into ice crystals around the small particles emitted by jet engines — require cold and humid atmospheric conditions, and don’t always stay around for long. Researchers say that by targeting specific flights that have a high chance of producing contrails, and varying their flight path ever so slightly, much of the damage could be prevented.
Adam Durant, a volcanologist and entrepreneur based in the UK, is aiming to do just that. “We could, in theory, solve this problem for aviation within one or two years,” he says…. Of contrails’ climate impact, “80 or 90% is coming from only maybe five to 10% of all flights,” says Durant. “Simply redirecting a small proportion of flights can actually save the majority of the contrail climate impact….”
In 2021, scientists calculated that addressing the contrail problem would cost under $1 billion a year, but provide benefits worth more than 1,000 times as much. And a study from Imperial College London showed that diverting just 1.7% of flights could reduce the climate damage of contrails by as much as 59%.
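The leverage claimed in those studies can be restated as simple ratios (all figures from the article):

```python
# Disproportionality of the contrail problem and its proposed fix.
flights_diverted_pct = 1.7     # share of flights rerouted (Imperial College study)
damage_reduction_pct = 59      # resulting cut in contrail climate damage

leverage = damage_reduction_pct / flights_diverted_pct
print(round(leverage, 1))      # ~34.7: each 1% of flights diverted cuts
                               # roughly 35% of the contrail damage

cost_per_year_usd = 1e9        # "under $1 billion a year" (2021 estimate)
benefit_multiple = 1_000       # "benefits worth more than 1,000 times as much"
print(cost_per_year_usd * benefit_multiple)  # over $1 trillion in annual benefit
```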
Durant’s company Satavia is now testing its technology with two airlines and “actively looking for more airlines in 2023 to work with, as we start scaling up the service that we offer.”
Truly addressing the issue may require some changes to air traffic rules, Durant says — but he’s not the only one working on the issue. There’s also the task force of a non-profit energy think tank that includes six airlines, plus researchers and academics. “We could seriously reduce, say, 50% of the industry’s contrails impact by 2030,” Durant tells CNN. “That’s totally attainable, because we can do it with software and analytics.”
[…] While all scientific results from the LHCb collaboration are already publicly available through open access papers, the data used by the researchers to produce these results is now accessible to anyone in the world through the CERN open data portal. The data release is made in the context of CERN’s Open Science Policy, reflecting the values of transparency and international collaboration enshrined in the CERN Convention for more than 60 years.
[…]
The data sample made available amounts to 20% of the total data set collected by the LHCb experiment in 2011 and 2012 during LHC Run 1. It comprises 200 terabytes containing information obtained from proton–proton collision events filtered and recorded with the detector.
[…]
The analysis of LHC data is a complex and time-consuming exercise. Therefore, to facilitate the analysis, the samples are accompanied by extensive documentation and metadata, as well as a glossary explaining several hundred special terms used in the preprocessing. The data can be analyzed using dedicated LHCb algorithms, which are available as open source software.
Researchers have announced that they simulated two minuscule black holes in a quantum computer and transmitted a message between them through what amounted to a tunnel in space-time.
They said that based on the quantum information teleported, a traversable wormhole appeared to have emerged, but that no rupture of space and time was physically created in the experiment, according to the study published in the journal Nature on Wednesday.
[…]
Caltech physicist Maria Spiropulu, a co-author of the research, described it as having the characteristics of a “baby wormhole”, and now hopes to make “adult wormholes and toddler wormholes step-by-step”. The wormhole dynamics were observed on a quantum device at Google called the Sycamore quantum processor.
Experts who were not involved in the experiment cautioned that it was important to note that a physical wormhole had not actually been created, but noted the future possibilities.
Daniel Harlow, a physicist at MIT, told the New York Times the experiment was based on a model so simple that it could just as well have been studied with pencil and paper.
“I’d say that this doesn’t teach us anything about quantum gravity that we didn’t already know,” Harlow wrote. “On the other hand, I think it is exciting as a technical achievement, because if we can’t even do this (and until now we couldn’t), then simulating more interesting quantum gravity theories would certainly be off the table.”
The study authors themselves made clear that scientists remain a long way from being able to send people or other living beings through such a portal.
[…]
“These ideas have been around for a long time and they’re very powerful ideas,” Lykken said. “But in the end, we’re in experimental science, and we’ve been struggling now for a very long time to find a way to explore these ideas in the laboratory. And that’s what’s really exciting about this. It’s not just, ‘Well, wormholes are cool.’ This is a way to actually look at these very fundamental problems of our universe in a laboratory setting.”
Scientists at the U.S. Department of Agriculture’s (USDA) Agricultural Research Service (ARS) recently announced that plants could be used to produce nanobodies that quickly block emerging pathogens in human medicine and agriculture. These nanobodies represent a promising new way to treat viral diseases, including SARS-CoV-2.
Nanobodies are small antibody proteins naturally produced in specific animals like camels, alpacas, and llamas.
ARS researchers turned to evaluating nanobodies to prevent and treat citrus greening disease in citrus trees. These scientists are now using their newly developed and patented Symbiont™ technology to show that nanobodies can be easily produced in a plant system with broad agricultural and public health applications.
As a proof of concept, researchers showed that nanobodies targeting the SARS-CoV-2 virus could be made in plant cells and remain functional in blocking the binding of the SARS-CoV-2 spike protein to its receptor protein: the process responsible for initiating viral infection in human cells.
“We initially wanted to develop sustainable solutions to pathogens in crop production,” said ARS researcher Robert Shatters, Jr. “The results of that research are indeed successful and beneficial for the nation’s agricultural system. But now we are aware of an even greater result—the benefits of producing therapeutics in plants now justify the consideration of using plants to mass produce COVID-19 protein-based therapies.”
AgroSource, Inc. collaborated with USDA-ARS to develop the plant-based production system. They are currently taking the necessary steps to see how they can move this advancement into the commercial sector.
“This is a huge breakthrough for science and innovative solutions to agricultural and public health challenges,” said ARS researcher Michelle Heck. “This cost-efficient, plant-based system proves that there are alternative ways to confront and prevent the spread of emerging pathogens. The approach has the potential to massively expand livelihood development opportunities in rural agricultural areas of the nation and in other countries.”
The findings are published on the bioRxiv preprint server.
More information: Marco Pitino et al, Plant production of high affinity nanobodies that block SARS-CoV-2 spike protein binding with its receptor, human angiotensin converting enzyme, bioRxiv (2022). DOI: 10.1101/2022.09.03.506425
For the past 50 years, scientists around the world have debated why lightning zig-zags and how it is connected to the thunder cloud above.
There hasn’t been a definitive explanation until now, with a University of South Australia plasma physicist publishing a landmark paper that solves both mysteries.
[…]
The answer? Singlet-delta metastable oxygen molecules.
Basically, lightning happens when electrons hit oxygen molecules with enough energy to create high energy singlet delta oxygen molecules. After colliding with the molecules, the “detached” electrons form a highly conducting step—initially luminous—that redistributes the electric field, causing successive steps.
The conducting column connecting the step to the cloud remains dark when electrons attach to neutral oxygen molecules, followed by immediate detachment of the electrons by singlet delta molecules.
[…]
The paper, "Toward a theory of stepped leaders in lightning," is published in the Journal of Physics D: Applied Physics. It is authored by Dr. John Lowke and Dr. Endre Szili from the Future Industries Institute at the University of South Australia.
More information: John J Lowke et al, Toward a theory of “stepped-leaders” of lightning, Journal of Physics D: Applied Physics (2022). DOI: 10.1088/1361-6463/aca103
UCSF researchers use ISRIB to block the molecular stress response in order to restore cognitive function.
ISRIB, a tiny molecule identified by University of California, San Francisco (UCSF) researchers, can repair the neural and cognitive effects of concussion in mice weeks after the damage, according to a new study.
ISRIB blocks the integrated stress response (ISR), a quality control process for protein production that, when activated chronically, can be harmful to cells.
The study, which was recently published in the Proceedings of the National Academy of Sciences, discovered that ISRIB reverses the effects of traumatic brain injury (TBI) on dendritic spines, an area of neurons vital to cognition. The drug-treated mice also showed sustained improvements in working memory.
“Our goal was to see if ISRIB could ameliorate the neural effects of concussion,” said Michael Stryker, Ph.D., a co-senior author of the study and professor of physiology at UCSF. “We were pleased to find the drug was tremendously successful in normalizing neuronal and cognitive function with lasting effects.”
TBI is a leading cause of long-term neurological disability, with patients’ quality of life suffering as a result of difficulties in concentration and memory. It’s also the strongest environmental risk factor for dementia — even a minor concussion boosts an individual’s risk dramatically.
[…]
Using advanced imaging techniques, Frias observed the effects of TBI on dendritic spines, the primary site of excitatory communication between neurons, over the course of multiple days.
In healthy conditions, neurons show a fairly consistent rate of spine formation, maturation, and elimination – dynamics that support learning and memory. But after a single mild concussion, mouse cortical neurons showed a massive burst of newly formed spines and continued to make excessive spines for as long as they were measured.
“Some may find this counterintuitive at first, assuming more dendritic spines would be a good thing for making new memories,” said co-senior author Susanna Rosi, PhD, a professor of physical therapy and neurological surgery at UCSF at the time of the study, now also at Altos Labs. “But in actuality, having all too many new spines is like being in a noisy room – when too many people are talking, you can’t hear the information you need.”
These new spines didn’t stick around for very long, however, and most were removed within days, meaning they hadn’t formed lasting functional synaptic connections.
These aberrant dynamics were rapidly reversed once mice were treated with ISRIB. By blocking the ISR, the drug was able to repair the neuronal structural changes resulting from the brain injury and restore normal rates of spine dynamics. These neuronal structural alterations were also associated with an improvement in performance to normal levels in a behavioral assay of working memory, which persisted for over a month after the final treatment.
“A month in a mouse is several years in a human, so to be able to reverse the effects of concussion in such a lasting way is really exciting,” said Frias.
Dr. Jiang, an assistant professor in the UBC faculty of forestry and the Canada Research Chair in Sustainable Functional Biomaterials, started developing a "biofoam" many years ago, both to find new uses for wood waste and to reduce pollution from packaging foam.
"Styrofoam waste fills up to 30 percent of global landfills and can take more than 500 years to break down. Our biofoam breaks down in the soil in a couple of weeks, requires little heat and few chemicals to make, and can be used as a substitute for packaging foams, packing peanuts and even thermal insulation boards," says Dr. Jiang.
[…]
“Our Nation was trying to create a new economy out of what was left of our forest after the wildfires and the damage caused by the mountain pine beetle epidemic in the 1990s and early 2000s. The amount of timber available for harvest in the next 20 to 60 years was significantly reduced. I have often asked why, when trees are harvested, up to 50 percent of the tree is left behind to just burn. As a Nation, we were also concerned about the losses in habitat, water quality, decline in moose and salmon populations and the acceleration in climate change,”
[…]
“A unique feature of this project is that the intellectual property is shared between UBC and First Nations,”
[…]An important clinical trial is now underway in the UK. The study is the first to transfuse red blood cells grown in the lab from donated stem cells into humans. Should this research pay off, these blood cells would be incredibly valuable for people with rare blood types, though they wouldn’t replace the need for traditional blood donation.
The RESTORE trial, as it’s known, is being conducted by scientists from the UK’s National Health Services and various universities. At least 10 healthy volunteers are expected to be enrolled in the study. All of them will receive two mini-transfusions, spaced four months apart and in random order, of the lab-grown blood cells and standard cells, both of which are derived from the same donor. As of early Monday, two participants have already gotten the lab-grown blood cells and so far appear to have experienced no side-effects.
The first-of-its-kind experiment is a Phase I trial, meaning that it’s primarily designed to test the safety of a novel or experimental treatment. But the lab-grown cells are theoretically fresher than the mix of newer and older blood cells taken from a typical blood donation (on average, red blood cells live for about 120 days). So the researchers are hoping that the lab-grown cells survive longer than the standard cells in their recipients.
“If our trial, the first such in the world, is successful, it will mean that patients who currently require regular long-term blood transfusions will need fewer transfusions in [the] future, helping transform their care,” said chief researcher Cedric Ghevaert, a hematologist and a professor in transfusion medicine at the University of Cambridge, in a statement released by the NHS.
[…]
Should this project turn out to be a success, lab-grown blood cells still won't replace the donated supply anytime soon. The team's process is far less efficient than the human body: currently, they need about 24 liters of nutrient solution to harvest just one to two tablespoons of red blood cells. Meanwhile, about 45% of our blood is composed of red blood cells.
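The scale of that efficiency gap can be made concrete with some rough arithmetic on the figures above. The ~470 mL whole-blood donation volume used below is a typical figure assumed for illustration, not a number from the article:

```python
# Rough yield arithmetic on the lab-grown blood figures quoted above.
# The 470 mL donation volume is a typical figure assumed here,
# not taken from the article.

TBSP_ML = 15                     # one tablespoon is roughly 15 mL
solution_l = 24                  # litres of nutrient solution per batch
best_yield_ml = 2 * TBSP_ML      # best case: 2 tablespoons of red cells

donation_ml = 470                # typical whole-blood donation (assumed)
rbc_fraction = 0.45              # ~45% of blood is red blood cells
donation_rbc_ml = donation_ml * rbc_fraction   # ~212 mL of red cells

# Litres of solution needed to match one donation's worth of red cells
batches = donation_rbc_ml / best_yield_ml
print(f"~{batches * solution_l:.0f} L of solution per donation-equivalent")
```

Even in the best case, the process needs on the order of a couple hundred liters of solution to match the red cells in a single donation, which is why the researchers frame lab-grown cells as a complement to donation rather than a replacement.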
Even if mass-produced lab-grown blood cells are a far off possibility, they may still be able to help many people in the near future. This technology could one day provide a more reliable and longer-lasting supply of blood cells to people who have a rare mix of blood types or who have developed conditions that make it difficult to receive standard transfusions, such as sickle cell disease.