WHO Endorses ‘Breakthrough’ Childhood Vaccine For Malaria

The fight against malaria, long one of the world's deadliest diseases, is likely to get easier now that the World Health Organization has endorsed wide use of a malaria vaccine developed by GlaxoSmithKline, the first ever to win such approval. The vaccine will be recommended for children in sub-Saharan Africa and other high-risk areas as a four-dose schedule starting at age 5 months.

[…]

“This is a historic moment. The long-awaited malaria vaccine for children is a breakthrough for science, child health and malaria control,” said WHO Director-General Tedros Adhanom Ghebreyesus in a statement announcing the agency's endorsement of the vaccine. “Using this vaccine on top of existing tools to prevent malaria could save tens of thousands of young lives each year.”

Despite the good news, GlaxoSmithKline’s vaccine, which is currently code-named RTS,S/AS01 but will be branded as Mosquirix, is only modestly effective. In the clinical trials evaluated for WHO approval, it was found to prevent around half of severe cases caused by P. falciparum malaria, compared to the control group. But that level of efficacy was seen only in the first year after vaccination, and by the fourth year protection had waned to very low levels. At roughly 55% efficacy, the vaccine meets the bare minimum for WHO endorsement.

A major study this year did find that a combination of the vaccine and anti-malarial drugs can further reduce the risk of severe disease and death by 70%, a much more appealing target for public health programs. But even as is, one study has projected that the vaccine would prevent millions of cases and over 20,000 deaths annually in sub-Saharan Africa if deployed widely.

Like other vaccines before it, Mosquirix may also represent the first step toward more effective vaccines in the future. There are several other candidates in development already, including one from Moderna that’s relying on the same mRNA platform as the company’s successful covid-19 vaccine.

Source: WHO Endorses ‘Breakthrough’ Childhood Vaccine For Malaria

CRISPR Gene-Editing Experiment Using Direct Injection Partly Restores Vision In Legally Blind Patients

Carlene Knight’s vision was so bad that she couldn’t even maneuver around the call center where she works using her cane. But that’s changed as a result of volunteering for a landmark medical experiment. Her vision has improved enough for her to make out doorways, navigate hallways, spot objects and even see colors. Knight is one of seven patients with a rare eye disease who volunteered to let doctors modify their DNA by injecting the revolutionary gene-editing tool CRISPR directly into cells that are still in their bodies. Knight and [another volunteer in the experiment, Michael Kalberer] gave NPR exclusive interviews about their experience. This is the first time researchers worked with CRISPR this way. Earlier experiments had removed cells from patients’ bodies, edited them in the lab and then infused the modified cells back into the patients. […]

CRISPR is already showing promise for treating devastating blood disorders such as sickle cell disease and beta thalassemia. And doctors are trying to use it to treat cancer. But those experiments involve taking cells out of the body, editing them in the lab, and then infusing them back into patients. That’s impossible for diseases like [Leber congenital amaurosis, or LCA], because cells from the retina can’t be removed and then put back into the eye. So doctors genetically modified a harmless virus to ferry the CRISPR gene editor and infused billions of the modified viruses into the retinas of Knight’s left eye and Kalberer’s right eye, as well as one eye of five other patients. The procedure was done on only one eye just in case something went wrong. The doctors hope to treat the patients’ other eye after the research is complete. Once the CRISPR was inside the cells of the retinas, the hope was that it would cut out the genetic mutation causing the disease, restoring vision by reactivating the dormant cells.

The procedure didn’t work for all of the patients, who have been followed for between three and nine months. The reasons it didn’t work might have been because their dose was too low or perhaps because their vision was too damaged. But Kalberer, who got the lowest dose, and one volunteer who got a higher dose, began reporting improvement starting at about four to six weeks after the procedure. Knight and one other patient who received a higher dose improved enough to show improvement on a battery of tests that included navigating a maze. For two others, it’s too soon to tell. None of the patients have regained normal vision — far from it. But the improvements are already making a difference to patients, the researchers say. And no significant side effects have occurred. Many more patients will have to be treated and followed for much longer to make sure the treatment is safe and know just how much this might be helping.

Source: CRISPR Gene-Editing Experiment Partly Restores Vision In Legally Blind Patients – Slashdot

Scientists develop the next generation of reservoir computing

A relatively new type of computing that mimics the way the human brain works has already been transforming how scientists tackle some of the most difficult information-processing problems.

Now, researchers have found a way to make what is called reservoir computing work between 33 and a million times faster, with significantly fewer computing resources and less data input needed.

In fact, in one test of this next-generation reservoir computing, researchers solved a complex computing problem in less than a second on a desktop computer.

Using today’s state-of-the-art technology, the same problem requires a supercomputer to solve and still takes much longer, said Daniel Gauthier, lead author of the study and professor of physics at The Ohio State University.

[…]

Reservoir computing is a machine learning algorithm developed in the early 2000s and used to solve the “hardest of the hard” computing problems, such as forecasting the evolution of dynamical systems that change over time, Gauthier said.

Dynamical systems, like the weather, are difficult to predict because just one small change in one condition can have massive effects down the line, he said.

One famous example is the “butterfly effect,” in which—in one metaphorical illustration—changes created by a butterfly flapping its wings can eventually influence the weather weeks later.
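The sensitivity behind the butterfly effect is easy to demonstrate numerically. Below is a rough sketch using the Lorenz equations (the same system referenced later in the article) with the standard textbook parameters and a simple Runge-Kutta integrator; the step size and starting points are illustrative choices.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one step with 4th-order Runge-Kutta."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Two trajectories that start almost identically (the "butterfly flap").
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])
for _ in range(2000):            # integrate 20 time units
    a, b = lorenz_step(a), lorenz_step(b)

# The initial 1e-8 separation has grown by many orders of magnitude.
print(np.linalg.norm(a - b))
```

Run long enough, the two forecasts become entirely unrelated even though they started a hundred-millionth apart, which is exactly why dynamical systems like weather are so hard to predict.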

Previous research has shown that reservoir computing is well-suited for learning dynamical systems and can provide accurate forecasts about how they will behave in the future, Gauthier said.

It does that through the use of an artificial neural network, somewhat like a human brain. Scientists feed data on a dynamical network into a “reservoir” of randomly connected artificial neurons in a network. The network produces useful output that the scientists can interpret and feed back into the network, building a more and more accurate forecast of how the system will evolve in the future.
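One common form of this idea is the "echo state network": a fixed, randomly connected reservoir, with only a linear readout trained. The sketch below is illustrative, not the study's setup; the signal (a noisy sine wave), reservoir size, sparsity, and ridge penalty are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy sine wave the reservoir will learn to advance one step.
t = np.linspace(0, 40 * np.pi, 4000)
u = np.sin(t) + 0.01 * rng.standard_normal(t.size)

# Reservoir: N sparsely, randomly connected neurons (fixed, never trained).
N = 200
W = rng.standard_normal((N, N)) * (rng.random((N, N)) < 0.05)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1
W_in = rng.uniform(-0.5, 0.5, N)

# Drive the reservoir with the input and record its internal states.
x = np.zeros(N)
states = np.empty((u.size - 1, N))
for i in range(u.size - 1):
    x = np.tanh(W @ x + W_in * u[i])
    states[i] = x

# Only the linear readout is trained (ridge regression); target = next value.
warmup = 100                                  # discard transient states
X, y = states[warmup:], u[warmup + 1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)

pred = X @ W_out
rmse = np.sqrt(np.mean((pred - y) ** 2))
print("one-step prediction RMSE:", rmse)
```

The key design point, matching the description above: the random network itself is never adjusted; all the learning happens in the cheap linear readout layer.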

The larger and more complex the system, and the more accurate the scientists want the forecast to be, the bigger the reservoir of artificial neurons has to be and the more computing resources and time are needed to complete the task.

One issue has been that the reservoir of artificial neurons is a “black box,” Gauthier said, and scientists have not known exactly what goes on inside of it—they only know it works.

The artificial neural networks at the heart of reservoir computing are built on mathematics, Gauthier explained.

“We had mathematicians look at these networks and ask, ‘To what extent are all these pieces in the machinery really needed?'” he said.

In this study, Gauthier and his colleagues investigated that question and found that the whole reservoir computing system could be greatly simplified, dramatically reducing the need for computing resources and saving significant time.

They tested their concept on a forecasting task involving a weather system developed by Edward Lorenz, whose work led to our understanding of the butterfly effect.

Their next-generation reservoir computing was a clear winner over today’s state-of-the-art on this Lorenz forecasting task. In one relatively simple simulation done on a desktop computer, the new system was 33 to 163 times faster than the current model.

But when the aim was for great accuracy in the forecast, the next-generation reservoir computing was about 1 million times faster. And the new-generation computing achieved the same accuracy with the equivalent of just 28 neurons, compared to the 4,000 needed by the current-generation model, Gauthier said.
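Broadly, the published next-generation scheme replaces the large random reservoir with a short vector of time-delayed copies of the input plus simple polynomial combinations of them, feeding a trained linear readout. The sketch below assumes that structure; the delay depth, test signal, and ridge penalty are illustrative choices, not the paper's.

```python
import numpy as np

def ngrc_features(u, k=2):
    """Linear part: k time-delayed copies of the input. Nonlinear part:
    all unique quadratic products of those delays. No random network."""
    rows = []
    for i in range(k, len(u)):
        lin = u[i - k:i][::-1]                         # u[i-1], u[i-2], ...
        quad = np.outer(lin, lin)[np.triu_indices(k)]  # unique products
        rows.append(np.concatenate(([1.0], lin, quad)))
    return np.array(rows)

t = np.linspace(0, 40 * np.pi, 4000)
u = np.sin(t)                    # toy signal for illustration

k = 4
X = ngrc_features(u, k)          # tiny feature vector: 1 + 4 + 10 = 15 values
y = u[k:]                        # one-step-ahead target
W = np.linalg.solve(X.T @ X + 1e-8 * np.eye(X.shape[1]), X.T @ y)
rmse = np.sqrt(np.mean((X @ W - y) ** 2))
print("NG-RC one-step RMSE:", rmse)
```

Note the scale of the simplification this stands in for: 15 deterministic features here versus hundreds of randomly connected neurons in the classic design, echoing the 28-versus-4,000 neuron comparison Gauthier describes.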

An important reason for the speed-up is that the “brain” behind this next generation of reservoir computing needs a lot less warmup and training compared to the current generation to produce the same results.

Warmup is training data that needs to be added as input into the reservoir computer to prepare it for its actual task.

[…]

“Currently, scientists have to put in 1,000 or 10,000 data points or more to warm it up. And that’s all data that is lost, that is not needed for the actual work. We only have to put in one or two or three data points,” he said.

[…]

In their test of the Lorenz forecasting task, the researchers could get the same results using 400 data points as the current generation produced using 5,000 or more, depending on the accuracy desired.

[…]


More information: Next generation reservoir computing, Nature Communications (2021). DOI: 10.1038/s41467-021-25801-2

Source: Scientists develop the next generation of reservoir computing

Physicists make square droplets and liquid lattices

When two substances are brought together, they will eventually settle into a steady state called thermodynamic equilibrium; examples include oil floating on top of water and milk mixing uniformly into coffee. Researchers at Aalto University in Finland wanted to disrupt this sort of state to see what happens—and whether they can control the outcome.

[…]

In their work, the team used combinations of oils with different dielectric constants and conductivities. They then subjected the liquids to an electric field.

“When we turn on an electric field over the mixture, charge accumulates at the interface between the oils. This shears the interface out of thermodynamic equilibrium and into interesting formations,” explains Dr. Nikos Kyriakopoulos, one of the authors of the paper. As well as being disrupted by the electric field, the liquids were confined into a thin, nearly two-dimensional sheet. This combination led to the oils reshaping into various completely unexpected droplets and patterns.

The droplets in the experiment could be made into squares and hexagons with straight sides, which is almost impossible in nature, where small bubbles and droplets tend to form spheres. The two liquids could be also made to form into interconnected lattices: grid patterns that occur regularly in solid materials but are unheard of in mixtures. The liquids can even be coaxed into forming a torus, a donut shape, which was stable and held its shape while the field was applied—unlike in nature, as liquids have a strong tendency to collapse in and fill the hole at the center. The liquids can also form filaments that roll and rotate around an axis.

[…]

The research was carried out at the Department of Applied Physics in the Active Matter research group, led by Professor Timonen. The paper “Diversity of non-equilibrium patterns and emergence of activity in confined electrohydrodynamically driven liquids” is published open-access in Science Advances.


More information: Diversity of non-equilibrium patterns and emergence of activity in confined electrohydrodynamically driven liquids, Science Advances (2021). DOI: 10.1126/sciadv.abh1642

Source: Physicists make square droplets and liquid lattices

Masks Lead to Less Covid-19, Massive Study Finds

An enormous randomized trial of communities in Bangladesh seems to provide the clearest evidence yet that regular mask-wearing can impede the spread of the covid-19 pandemic. The study found that villages where masks were highly promoted and became more popular experienced noticeably lower rates of covid-like symptoms and confirmed past infections than villages where mask-wearing remained low. These improvements were even more pronounced for villages given free surgical masks over cloth masks.

Plenty of data has emerged over the last year and a half to support the use of masks during the covid-19 pandemic, both in the real world and in the lab. But it’s less clear exactly how much of a benefit these masks can provide wearers (and their communities), and there are at least some studies that have been inconclusive in showing a noticeable benefit.

[…]

Late last year, however, dozens of scientists teamed up with public health advocacy organizations and the Bangladesh government to conduct a massive randomized trial of masks—often seen as the gold standard of evidence. And on Wednesday, they released the results of their research in a working paper through the research nonprofit Innovations for Poverty Action.

The study involved 600 villages in a single region of the country with over 350,000 adult residents combined. Similarly matched villages were randomly assigned to two conditions (a pair of villages with similar population density, for instance, would go to one condition or the other). In one condition, the researchers and their partners promoted the use of masks through various incentives between November 2020 and January 2021. These incentives included free masks, endorsements by local leaders, and sometimes financial prizes for villages that achieved widespread mask usage. In two-thirds of the intervention villages, the free masks given were surgical, while one-third were given free cloth masks. In the second condition, the researchers simply observed the villages and did nothing to encourage masks during that time.

Residents in the villages where masks were encouraged did start wearing them more, though no individual nudge or incentive seemed to do better than the others. By the end, about 42% of residents in these villages wore masks regularly, compared to 13% of those in the control group. And in these communities, the odds of people reporting symptoms that may have been covid or testing positive for antibodies to the virus declined.

Overall, the average proportion of people who reported symptoms in the weeks following the mask promotions was 11% lower in these villages than in the control group, and the average share of people carrying antibodies was over 9% lower. These differences were larger for surgical-mask villages (a 12% versus 5% reduction in symptoms) and for residents over 60 (a 35% reduction in infections among older residents of surgical-mask villages).
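For readers unfamiliar with how such figures are derived: a relative reduction compares prevalence between the trial arms. A toy calculation with invented counts (these are not the Bangladesh study's numbers):

```python
# Hypothetical illustration of a relative reduction in symptomatic
# prevalence in a cluster trial. All counts below are invented.
treat_symptomatic, treat_n = 7_000, 170_000      # intervention villages
control_symptomatic, control_n = 8_000, 173_000  # control villages

p_treat = treat_symptomatic / treat_n
p_control = control_symptomatic / control_n
relative_reduction = 1 - p_treat / p_control     # "X% lower than control"
print(f"relative reduction in symptomatic prevalence: {relative_reduction:.1%}")
```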

Some of this effect might not have come directly from the ability of masks to block transmission of the virus. Those who used masks, the study found, were also more likely to practice social distancing. That’s a relevant finding, the authors note, since some people who have argued against mask mandates do so by claiming that masks will only make people act more carelessly. This study suggests that the opposite is true—that masks make us more, not less, conscientious of others.

[…]

Source: Masks Lead to Less Covid-19, Massive Study Finds

Reddit offers a non-response to uproar over COVID-19 misinformation

Reddit has finally cracked down on COVID-19 misinformation following growing calls to act, although it probably won’t satisfy many of its critics. The social site has banned r/NoNewNormal and quarantined 54 other COVID-19 denial subreddits, but not over the false claims themselves. Instead, it’s for abuse — NoNewNormal was caught brigading en masse (that is, flooding other subreddits) despite warnings, while the other communities violated a rule forbidding harassment and bullying.

The company didn’t, however, relent on its approach to tackling the misinformation itself. Reddit said it clamps down on posts that encourage a “significant risk of physical harm” or are manipulations intended to mislead others, but made no mention of purging posts or subreddits merely for making demonstrably false claims about COVID-19 or vaccines.

Reddit previously defended its position by arguing its platform was meant to foster “open and authentic” conversations, even if they disagree with a widely established consensus. However, that stance hasn’t satisfied many of Reddit’s users. Business Insider noted 135 subreddits went “dark” (that is, went private) in protest over Reddit’s seeming tolerance of COVID-19 misinformation, including major communities like r/TIFU.

Critics among those groups contended that Reddit let these groups blossom through “inaction and malice,” and that Reddit wasn’t consistent in enforcing its own policies on misinformation and abuse. As one redditor pointed out, Reddit’s claims about allowing dissenting ideas don’t carry much weight — the COVID-19 denial groups are presenting false statements, not just contrary opinions.

[…]

Source: Reddit offers a non-response to uproar over COVID-19 misinformation | Engadget

The New Science of Clocks Prompts Questions About the Nature of Time, Finds Limits on Their Accuracy

“It occurred to us that actually a clock is a thermal machine,”[…]Like an engine, a clock harnesses the flow of energy to do work, producing exhaust in the process. Engines use energy to propel; clocks use it to tick.

Over the past five years, through studies of the simplest conceivable clocks, the researchers have discovered the fundamental limits of timekeeping. They’ve mapped out new relationships between accuracy, information, complexity, energy and entropy — the quantity whose incessant rise in the universe is closely associated with the arrow of time.

These relationships were purely theoretical until this spring, when the experimental physicist Natalia Ares and her team at the University of Oxford reported measurements of a nanoscale clock that strongly support the new thermodynamic theory.

[…]

The first thing to note is that pretty much everything is a clock. Garbage announces the days with its worsening smell. Wrinkles mark the years. “You could tell time by measuring how cold your coffee has gotten on your coffee table,”

[…]

Huber, Erker and their colleagues realized that a clock is anything that undergoes irreversible changes: changes in which energy spreads out among more particles or into a broader area. Energy tends to dissipate — and entropy, a measure of its dissipation, tends to increase — simply because there are far, far more ways for energy to be spread out than for it to be highly concentrated. This numerical asymmetry, and the curious fact that energy started out ultra-concentrated at the beginning of the universe, are why energy now moves toward increasingly dispersed arrangements, one cooling coffee cup at a time.

Not only do energy’s strong spreading tendency and entropy’s resulting irreversible rise seem to account for time’s arrow, but according to Huber and company, they also account for clocks. “The irreversibility is really fundamental,” Huber said. “This shift in perspective is what we wanted to explore.”

Coffee doesn’t make a great clock. As with most irreversible processes, its interactions with the surrounding air happen stochastically. This means you have to average over long stretches of time, encompassing many random collisions between coffee and air molecules, in order to accurately estimate a time interval. This is why we don’t refer to coffee, or garbage or wrinkles, as clocks.

We reserve that name, the clock thermodynamicists realized, for objects whose timekeeping ability is enhanced by periodicity: some mechanism that spaces out the intervals between the moments when irreversible processes occur. A good clock doesn’t just change. It ticks.

The more regular the ticks, the more accurate the clock. In their first paper, published in Physical Review X in 2017, Erker, Huber and co-authors showed that better timekeeping comes at a cost: The greater a clock’s accuracy, the more energy it dissipates and the more entropy it produces in the course of ticking.

“A clock is a flow meter for entropy,” said Milburn.

They found that an ideal clock — one that ticks with perfect periodicity — would burn an infinite amount of energy and produce infinite entropy, which isn’t possible. Thus, the accuracy of clocks is fundamentally limited.

Indeed, in their paper, Erker and company studied the accuracy of the simplest clock they could think of: a quantum system consisting of three atoms. A “hot” atom connects to a heat source, a “cold” atom couples to the surrounding environment, and a third atom that’s linked to both of the others “ticks” by undergoing excitations and decays. Energy enters the system from the heat source, driving the ticks, and entropy is produced when waste energy gets released into the environment.


The researchers calculated that the ticks of this three-atom clock become more regular the more entropy the clock produces. This relationship between clock accuracy and entropy “intuitively made sense to us,” Huber said, in light of the known connection between entropy and information.
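That intuition can be illustrated with a toy model (not the paper's three-atom clock): a clock that ticks once every N random, irreversible events. Folding more dissipative events into each tick narrows the spread of tick intervals like 1/√N, so a clock that "spends" more entropy per tick ticks more regularly. The exponential waiting times and sample sizes here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def tick_regularity(events_per_tick, n_ticks=10_000):
    """Toy clock: one tick fires after `events_per_tick` random,
    exponentially distributed waits (each wait standing in for an
    irreversible, entropy-producing event). Returns the coefficient
    of variation of the tick intervals; lower means more regular."""
    waits = rng.exponential(1.0, size=(n_ticks, events_per_tick))
    intervals = waits.sum(axis=1)
    return intervals.std() / intervals.mean()

# More dissipative events per tick -> steadier ticks,
# with the spread shrinking like 1/sqrt(N).
for n in (1, 10, 100):
    print(n, round(tick_regularity(n), 3))
```

A single-event "coffee cup clock" has intervals as noisy as their own average, while averaging 100 events per tick cuts the jitter tenfold, mirroring the accuracy-versus-dissipation tradeoff described above.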

In precise terms, entropy is a measure of the number of possible arrangements that a system of particles can be in. These possibilities grow when energy is spread more evenly among more particles, which is why entropy rises as energy disperses. Moreover, in his 1948 paper that founded information theory, the American mathematician Claude Shannon showed that entropy also inversely tracks with information: The less information you have about, say, a data set, the higher its entropy, since there are more possible states the data can be in.
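Shannon's measure is simple to compute. A minimal sketch of the definition H(p) = −Σ pᵢ log₂ pᵢ, showing the inverse link between information and entropy:

```python
import numpy as np

def shannon_entropy(p):
    """H(p) = -sum p_i log2 p_i, in bits; terms with p_i = 0 contribute 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                              # 0 * log(0) is treated as 0
    return float(-(p * np.log2(p)).sum() + 0.0)   # +0.0 normalizes -0.0

# Knowing the state exactly (one certain outcome) -> zero entropy.
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))      # 0.0
# Knowing nothing (uniform over 4 states) -> maximal entropy, log2(4) bits.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```

The less you know about which state the system is in, the flatter the distribution and the higher the entropy, which is precisely the sense in which entropy "inversely tracks with information."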

“There’s this deep connection between entropy and information,” Huber said, and so any limit on a clock’s entropy production should naturally correspond to a limit of information — including, he said, “information about the time that has passed.”

In another paper published in Physical Review X earlier this year, the theorists expanded on their three-atom clock model by adding complexity — essentially extra hot and cold atoms connected to the ticking atom. They showed that this additional complexity enables a clock to concentrate the probability of a tick happening into narrower and narrower windows of time, thereby increasing the regularity and accuracy of the clock.

In short, it’s the irreversible rise of entropy that makes timekeeping possible, while both periodicity and complexity enhance clock performance. But until 2019, it wasn’t clear how to verify the team’s equations, or what, if anything, simple quantum clocks had to do with the ones on our walls.

[…]

The vibrating membrane isn’t a quantum system, but it’s small and simple enough to allow precise tracking of its motion and energy use. “We can tell from the energy dissipation in the circuit itself how much the entropy changes,” Ares said.

She and her team set out to test the key prediction from Erker and company’s 2017 paper: that there should be a linear relationship between entropy production and accuracy. It was unclear whether the relationship would hold for a larger, classical clock, like the vibrating membrane. But when the data rolled in, “we saw the first plots [and] we thought, wow, there is this linear relationship,” Huber said.

The regularity of the membrane clock’s vibrations directly tracked with how much energy entered the system and how much entropy it produced. The findings suggest that the thermodynamic equations the theorists derived may hold universally for timekeeping devices.

[…]

One major aspect of the mystery of time is the fact that it doesn’t play the same role in quantum mechanics as other quantities, like position or momentum; physicists say there are no “time observables” — no exact, intrinsic time stamps on quantum particles that can be read off by measurements. Instead, time is a smoothly varying parameter in the equations of quantum mechanics, a reference against which to gauge the evolution of other observables.

Physicists have struggled to understand how the time of quantum mechanics can be reconciled with the notion of time as the fourth dimension in Einstein’s general theory of relativity, the current description of gravity. Modern attempts to reconcile quantum mechanics and general relativity often treat the four-dimensional space-time fabric of Einstein’s theory as emergent, a kind of hologram cooked up by more abstract quantum information. If so, both time and space ought to be approximate concepts.

The clock studies are suggestive, in showing that time can only ever be measured imperfectly. The “big question,” said Huber, is whether the fundamental limit on the accuracy of clocks reflects a fundamental limit on the smooth flow of time itself — in other words, whether stochastic events like collisions of coffee and air molecules are what time ultimately is.

[…]

Source: The New Science of Clocks Prompts Questions About the Nature of Time | Quanta Magazine

CDC shares 8 new charts that show how powerful Pfizer’s vaccine is against COVID-19 and the Delta variant

Pfizer’s COVID-19 vaccine is now not only approved for everyone over 16 years old, it’s recommended.

On Monday, an independent advisory committee to the Centers for Disease Control and Prevention voted unanimously to support recommending the vaccine.

The decision of those 14 experts was based on overwhelming evidence that Pfizer’s 2-shot immunization, named Comirnaty, which was fully approved by the Food and Drug Administration last week, is not only safe but also works very well at preventing disease.

The independent experts on the CDC panel cheered on the creation of the COVID-19 vaccines in the midst of a pandemic, calling it a “miraculous accomplishment” and “a moment of incredible scientific innovation.”

Here are eight charts and graphs that lay out why Pfizer’s vaccine was given a big thumbs up:

COVID-19 vaccines are doing a great job keeping people healthy, alive, and out of the hospital.

[Chart: COVID-19 hospitalization rates for vaccinated vs. unvaccinated adults; vaccinated rates near zero, unvaccinated rates fluctuating but consistently much higher. Source: CDC ACIP meeting, Aug. 30, 2021, https://www.cdc.gov/vaccines/acip/meetings/slides-2021-08-30.html]

The CDC committee looked at data from across the US showing unvaccinated adults are being hospitalized for COVID-19 at rates roughly 16 times higher than the vaccinated.

As of August 23, 0.006% of vaccinated Americans (fewer than 9,000 people) have had a severe enough case of COVID-19 to be hospitalized, according to CDC data.

The number of vaccinated people who’ve died from COVID-19 is even smaller. Of the 636,015 American COVID-19 deaths, just 2,063, or 0.3%, have been in vaccinated people, a tiny fraction considering that more than 174 million people in the US are fully vaccinated.
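The quoted shares follow directly from the counts in the article; a quick arithmetic check:

```python
# Checking the shares quoted in the article (figures as reported).
total_deaths = 636_015          # all US COVID-19 deaths at the time
vaccinated_deaths = 2_063       # deaths among fully vaccinated people
fully_vaccinated = 174_000_000  # "more than 174 million" fully vaccinated

share_of_deaths = vaccinated_deaths / total_deaths
death_rate_vaccinated = vaccinated_deaths / fully_vaccinated
print(f"{share_of_deaths:.1%} of US COVID-19 deaths were in vaccinated people")
print(f"{death_rate_vaccinated:.4%} of fully vaccinated people died of COVID-19")
```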

Unvaccinated people under age 50 are getting hospitalized at especially high rates this year.

[Charts: COVID-19 hospitalization rates broken down by age group; high among the unvaccinated, near zero among the vaccinated. Source: CDC ACIP meeting, Aug. 30, 2021]

The CDC tracks these rates of COVID-19 hospitalizations through COVID-NET, a system which collects data from 250 hospitals across 14 states (located in different areas of the country) every week.

It’s true that more vaccinated people are now catching COVID-19, due to the Delta variant. But their cases are generally mild and the vaccines are still preventing severe disease well.

[…]

Source: CDC shares 8 new charts that show how powerful Pfizer’s vaccine is against COVID-19 and the Delta variant

Reddit’s Teach-the-Controversy Stance on COVID Vaccines Sparks Wider Protest

Over 135 subreddits have gone dark this week in protest of Reddit’s refusal to ban communities that spread misinformation about the COVID pandemic and vaccines.

Subreddits that went private include two with 10 million or more subscribers, namely r/Futurology and r/TIFU. The PokemonGo community is one of 15 other subreddits with at least 1 million subscribers that went private; another 15 subreddits with at least 500,000 subscribers also went private. They’re all listed in a post on r/VaxxHappened which has been coordinating opposition to Reddit management’s stance on pandemic misinformation. More subreddits are being added as they join the protest.

“Futurology has gone private to protest Reddit’s inaction on COVID-19 misinformation,” a message on that subreddit says. “Reddit won’t enforce their policies against misinformation, brigading, and spamming. Misinformation subreddits such as NoNewNormal and r/conspiracy must be shut down. People are dying from misinformation.”

[…]

Last week, the moderators of over 450 subreddits joined an open letter urging Reddit to “take action against the rampant Coronavirus misinformation on their website,” saying that subreddits existing “solely to spread medical disinformation and undermine efforts to combat the global pandemic should be banned.”

Reddit published a response defending its stance, saying it will continue to allow “debate” and “dissent” on vaccines and other COVID-related matters, even when it “challenge[s] consensus views.”

“We appreciate that not everyone agrees with the current approach to getting us all through the pandemic, and some are still wary of vaccinations. Dissent is a part of Reddit and the foundation of democracy,” the company said.

Reddit does draw a line somewhere, as it said it will continue to take action against communities “dedicated to fraud (e.g. fake vaccine cards) or encouraging harm (e.g. consuming bleach).” But in general, Reddit said, “we believe it is best to enable communities to engage in debate and dissent, and for us to link to the CDC wherever appropriate.”

[…]

Source: Reddit’s teach-the-controversy stance on COVID vaccines sparks wider protest | Ars Technica

Encouraging anti-vaxxers would definitely fall under the category of encouraging harm.

LED streetlights contribute to insect population declines

Streetlights—particularly those that use white light-emitting diodes (LEDs)—not only disrupt insect behavior but are also a culprit behind their declining numbers, a new study carried out in southern England showed Wednesday.

Artificial light at night had been identified as a possible factor behind falling insect numbers around the world, but the topic had been under-researched.

To address the question, scientists compared 26 roadside sites consisting of either hedgerows or grass verges that were lit by streetlights, against an equal number of nearly identical sites that were unlit.

They also examined a site with one unlit and two lit sections, all of which were similar in their vegetation.

The team chose moth caterpillars as a proxy for nocturnal insects more broadly, because they remain within a few meters of where they hatched during the larval stage of their lives, before they acquire the ability to fly.

The team either struck the hedges with sticks so that the caterpillars fell out, or swept the grass with nets to pick them up.

The results were eye-opening, with a 47 percent reduction in insect population at the hedgerow sites and a 37 percent reduction at the grass verge sites.

[…]

The lighting also disturbed their feeding behavior: when the team weighed the caterpillars, they found that those in the lighted areas were heavier.

[…]

The team found that the disruption was most pronounced in areas lit by LED lights as opposed to high-pressure sodium (HPS) lamps or older low-pressure sodium (LPS) lamps, both of which produce a yellow-orange glow that is less like sunlight.

[…]

“There are really quite accessible solutions,” said Boyes—like applying filters to change the lamps’ color, or adding shields so that the light shines only on the road, not insect habitats.

Source: LED streetlights contribute to insect population declines: study

Increase in Earth’s energy imbalance is proof that climate change is man-made

The observed trend in Earth’s energy imbalance (TEEI), a measure of the acceleration of heat uptake by the planet, is a fundamental indicator of perturbations to climate. Satellite observations (2001–2020) reveal a significant positive globally-averaged TEEI of 0.38 ± 0.24 W m⁻² decade⁻¹, but the contributing drivers have yet to be understood. Using climate model simulations, we show that it is exceptionally unlikely (<1% probability) that this trend can be explained by internal variability. Instead, TEEI is achieved only upon accounting for the increase in anthropogenic radiative forcing and the associated climate response. TEEI is driven by a large decrease in reflected solar radiation and a small increase in emitted infrared radiation. This is because recent changes in forcing and feedbacks are additive in the solar spectrum, while being nearly offset by each other in the infrared. We conclude that the satellite record provides clear evidence of a human-influenced climate system.
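The abstract's attribution argument rests on how unlikely a trend of 0.38 W m⁻² decade⁻¹ is under internal variability alone. The paper derives its <1% figure from climate-model control simulations; as a much-simplified sketch of how such a significance test works, the toy Monte Carlo below fits trends to noise-only series. The 0.5 W m⁻² interannual spread is an invented, illustrative number, and white noise understates real (autocorrelated) variability:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(20)          # 2001-2020, one annual mean per year
sigma = 0.5                    # assumed interannual noise (W/m^2); illustrative only
observed_trend = 0.38          # W/m^2 per decade, from the satellite record

# Fit a linear trend to many noise-only series and count how often
# pure chance produces a trend at least as large as the observed one.
n_sims = 100_000
noise = rng.normal(0.0, sigma, size=(n_sims, years.size))
slopes = np.polyfit(years, noise.T, 1)[0] * 10     # convert per-year to per-decade
p = np.mean(slopes >= observed_trend)
print(f"noise-only runs reaching the observed trend: {p:.3%}")
```

With these toy numbers the fraction comes out around a few percent; the paper's model-based estimate, which accounts for realistic variability, is the authoritative one.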

[…]

Source: Anthropogenic forcing and response yield observed positive trend in Earth’s energy imbalance | Nature Communications

Cloud seeding in UAE: Artificial rain with drones, electricity

The UAE is now testing a new method that has drones fly into clouds to give them an electric shock to trigger rain production, the BBC and CNN have previously reported.

The project is getting renewed interest after the UAE’s National Center of Meteorology recently published a series of videos on Instagram of heavy rain in parts of the country. Water gushed past trees, and cars drove on rain-soaked roads. The videos were accompanied by radar images of clouds tagged “#cloudseeding.”

The Independent reports that the recent rain is part of the drone cloud-seeding project.

[…]

The UAE oversaw more than 200 cloud seeding operations in the first half of 2020, successfully creating excess rainfall, the National News reported.

There have been successes in the U.S., as well as in China, India, and Thailand. Long-term cloud seeding in the mountains of Nevada has increased snowpack by 10% or more each year, according to research published by the American Meteorological Society. A 10-year cloud-seeding experiment in Wyoming resulted in 5-10% increases in snowpack, according to the State of Wyoming.

[…]

Source: Cloud seeding in UAE: Artificial rain with drones, electricity

Normal Touchscreens Can Also Detect Contaminated Water

We take for granted that the water coming out of the kitchen faucet is safe to drink, but that’s not always the case in other parts of the world. So researchers at the University of Cambridge are developing a new approach to testing for contaminants using a device that billions of people already use every day.

Modern capacitive touchscreens (the kind that can easily detect the subtlest finger taps instead of requiring users to press hard on the screen) feature an invisible grid of electrodes that carry a very small electrical charge. When your conductive finger touches the screen, it changes the charge level at a specific location, which the smartphone can detect from the grid coordinates. That’s a grossly simplified crash course on how the technology powering modern touchscreens works, but what’s important is its use of a changing electrical charge.
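The grid-coordinate idea above can be sketched in a few lines. Everything here (the grid size, the units, the simple argmax lookup) is invented for illustration; a real touch controller scans rows and columns of electrodes and reports where the measured capacitance departs from its baseline:

```python
import numpy as np

GRID = (8, 8)
baseline = np.full(GRID, 100.0)        # resting capacitance per cell (arbitrary units)

def touch(grid, row, col, strength=5.0):
    """Simulate a conductive finger (or droplet) shifting one cell's reading."""
    reading = grid.copy()
    reading[row, col] += strength
    return reading

def locate(baseline, reading):
    """Report the grid coordinate where the charge changed the most."""
    delta = np.abs(reading - baseline)
    idx = np.unravel_index(np.argmax(delta), delta.shape)
    return tuple(int(i) for i in idx)

reading = touch(baseline, 3, 5)
print(locate(baseline, reading))       # -> (3, 5)
```

The Cambridge result amounts to noticing that an ion-rich droplet perturbs this same grid in a way that depends on the liquid, not just on where it sits.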

In a recently published paper, the University of Cambridge researchers explain how a stripped-down touchscreen—the same hardware used in smartphones and tablets—can detect the electrically charged ions in an electrolyte. Different liquids were piped onto the surface of the touchscreen, and, using the standard software used to test these screens, the researchers were able to differentiate the samples because “the fluids all interact with the screen’s electric fields differently depending on the concentration of ions and their charge.”

The touchscreens used in mobile devices are tuned and calibrated to respond best to interactions with fingers, but the researchers believe that by altering the design of the electrodes, even in just a small area of the screen (a custom app could indicate exactly where a sample needs to be placed), the sensitivity could be optimized for detecting contaminants in samples like soil and water.

[…]

Source: Normal Touchscreens Can Also Detect Contaminated Water

Use of PFAS in cosmetics ‘widespread,’ new study finds – not a good thing

According to the study, 56% of foundations and eye products, 48% of lip products and 47% of mascaras tested were found to contain high levels of fluorine, which is an indicator of PFAS use in the product. Credit: University of Notre Dame

Many cosmetics sold in the United States and Canada likely contain high levels of per- and polyfluoroalkyl substances (PFAS), a potentially toxic class of chemicals linked to a number of serious health conditions, according to new research from the University of Notre Dame.

Scientists tested more than 200 cosmetics including concealers, foundations, eye and eyebrow products and various lip products. According to the study, 56 percent of foundations and eye products, 48 percent of lip products and 47 percent of mascaras tested were found to contain high levels of fluorine, which is an indicator of PFAS use in the product. The study was recently published in the journal Environmental Science & Technology Letters.

“These results are particularly concerning when you consider the risk of exposure to the consumer combined with the size and scale of a multibillion-dollar industry that provides these products to millions of consumers daily,” Graham Peaslee, professor of physics at Notre Dame and principal investigator of the study, said. “There’s the individual risk—these are products that are applied around the eyes and mouth with the potential for absorption through the skin or at the tear duct, as well as possible inhalation or ingestion. PFAS is a persistent chemical—when it gets into the bloodstream, it stays there and accumulates. There’s also the additional risk of environmental contamination associated with the manufacture and disposal of these products, which could affect many more people.”

Previously found in nonstick cookware, treated fabrics, fast food wrappers and, most recently, the firefighting foams used by firefighters across the country, PFAS are known as “forever chemicals” because the chemical compounds don’t naturally degrade—which means they end up contaminating groundwater for decades after their release into the environment. Use of PFAS in foam fire suppressants has been linked to contaminated drinking water, prompting the Department of Defense to switch to environmentally safer alternatives, for example.

Studies have linked certain PFAS to testicular cancer, hypertension, thyroid disease, and immunotoxicity in children.

Peaslee and the research team tested products purchased at retail locations in the United States as well as products purchased online in Canada. The study found high levels of fluorine in liquid lipsticks, waterproof mascaras and foundations often advertised as “long-lasting” and “wear-resistant.” Peaslee said this is not entirely surprising, given that PFAS are often used for their water-resistance and film-forming properties.

What is more concerning is that of the 29 products with high fluorine concentrations that were tested further and found to contain between four and 13 specific PFAS, only one listed PFAS as an ingredient on the product label.

“This is a red flag,” Peaslee said. “Our measurements indicate widespread use of PFAS in these products—but it’s important to note that the full extent of use of fluorinated chemicals in cosmetics is hard to estimate due to lack of strict labeling requirements in both countries.”

Peaslee’s novel method of detecting PFAS in a wide variety of materials has helped reduce the use of “forever chemicals” in consumer and industrial products.

Following a study from his lab in 2017, fast food chains that discovered their wrappers contained PFAS switched to alternative options. Peaslee continues to receive samples of firefighter turnout gear from fire departments around the world to test for PFAS, and his research has spurred conversations within the firefighter community to eliminate use of “forever chemicals” in various articles of personal protective equipment.

Source: Use of PFAS in cosmetics ‘widespread,’ new study finds

Scientists Create Enzyme That Can Destroy Plastic Within Days, Not Years

[…]

It looks like researchers have developed the perfect thing to combat this problem: a cocktail of plastic-eating enzymes that can degrade plastic in a matter of days—a process that normally takes hundreds of years.

The enzyme cocktail includes PETase and MHETase. These are produced by Ideonella sakaiensis, a type of bacteria that feeds on PET plastic (often found in plastic bottles).

Professor John McGeehan from the University of Portsmouth, said in a statement to news agency PA, “Currently, we get those building blocks from fossil resources such as oil and gas, which is really unsustainable. But if we can add enzymes to the waste plastic, we can start to break it down in a matter of days.”


In 2018, McGeehan was the one who accidentally developed the first enzyme that feasted on plastic. However, the original enzyme worked slowly. Researchers from the team explored different ways to speed up the process, and one such method was fusing a combination of enzymes—making a cocktail of sorts.

McGeehan explains, “PETase attacks the surface of the plastics and MHETase chops things up further, so it seemed natural to see if we could use them together, mimicking what happens in nature. Our first experiments showed that they did indeed work better together, so we decided to try to physically link them.”

He added, “It took a great deal of work on both sides of the Atlantic, but it was worth the effort – we were delighted to see that our new chimeric enzyme is up to three times faster than the naturally evolved separate enzymes, opening new avenues for further improvements.”


Apart from PET, the enzyme can also help degrade PEF, or polyethylene furanoate, which is found in some beer bottles. Sadly, these are the only two kinds of plastic it can degrade. However, McGeehan says the team is trying combinations with other enzymes to bridge this gap.

Source: Scientists Create Enzyme That Can Destroy Plastic Within Days, Not Years

It doesn’t say what the broken-down plastic turns into, though.

New Quantum Microscope Can See Tiny Structures in Living Cells

A team of researchers in Germany and Australia recently used a new microscopy technique to image nano-scale biological structures at a previously unmanageable resolution, without destroying the living cell. The technique, which employs laser light many millions of times brighter than the Sun, has implications for biomedical and navigation technologies.

The quantum optical microscope is an example of how the strange principle of quantum entanglement can feature in real-world applications. Two particles are entangled when their properties are interdependent—by measuring one of them, you can also know the properties of the other.

The sensor in the team’s microscope, described in a paper published today in Science, hinges on quantum light—entangled pairs of photons—to see better-resolved structures without damaging them.

“The key question we answer is whether quantum light can allow performance in microscopes that goes beyond the limits of what is possible using conventional techniques,” said Warwick Bowen, a quantum physicist at the University of Queensland in Australia and co-author of the new study, in an email. Bowen’s team found that, in fact, it can. “We demonstrate [that] for the first time, showing that quantum correlations can allow performance (improved contrast/clarity) beyond the limit due to photodamage in regular microscopes.” By photodamage, Bowen is referring to the way a laser bombardment of photons can degrade or destroy a microscope’s target, similar to the way ants will get crispy under a magnifying glass.

[…]

“Technical hurdles … will need to be overcome before the technology becomes commercial, but this experiment is a proof-of-principle that quantum techniques developed decades ago can and will be deployed to great advantage in the life sciences.”

While other microscopes operating with such intense light end up sizzling holes in what they’re trying to study, the team’s method didn’t. The researchers chemically fingerprinted a yeast cell using Raman scattering, which observes how some photons scatter off a given molecule to reveal that molecule’s vibrational signature. Raman microscopes are often used for this sort of fingerprinting, but the whole destroying-the-thing-we’re-trying-to-observe problem has long vexed researchers trying to see at higher resolutions. In this case, the team could see the cell’s lipid concentrations by using correlated photon pairs to get a great view of the cell without increasing the intensity of the microscope’s laser beam.

[…]

Source: New Quantum Microscope Can See Tiny Structures in Living Cells

Simple Slide Coating Gives a Boost to the Resolution of a Microscope

A light-powered microscope has a resolution limit of around 200 nanometers—which makes observing specimens smaller or closer together than that all but impossible. Engineers at the University of California San Diego have found a clever way to improve the resolution of a conventional microscope, but surprisingly it involves no upgrades to the lenses or optics inside it.

According to the Rayleigh criterion, proposed by John William Strutt, 3rd Baron Rayleigh, back in 1896, a traditional light-based microscope’s resolution is limited not only by the capabilities of its glass lenses but also by the nature of light itself, as a result of diffraction. The limitation means that an observer looking through the microscope at two objects closer than 200 nanometers apart will perceive them as a single object.
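For a concrete sense of the limit, the Rayleigh criterion puts the minimum resolvable separation at roughly d = 0.61 λ / NA, where λ is the light's wavelength and NA is the objective's numerical aperture. A quick check with typical values (green light and a high-end NA 1.4 oil-immersion objective, both chosen here purely for illustration) lands near the ~200 nm figure quoted above:

```python
def rayleigh_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Minimum resolvable separation for a diffraction-limited microscope."""
    return 0.61 * wavelength_nm / numerical_aperture

d = rayleigh_limit_nm(550, 1.4)    # green light, oil-immersion objective
print(f"{d:.0f} nm")               # -> 240 nm
```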

Electron microscopes, by comparison, blast a sample with a highly focused beam of electrons instead of visible light, and can achieve resolutions of less than a single nanometer. There’s a trade-off, however: samples observed through an electron microscope need to be placed inside a vacuum chamber, which has the unfortunate downside of killing living things, so observing cells and other living phenomena in action isn’t possible. To date, there hasn’t been an in-between option, but it sounds like that’s exactly what these engineers have created.

Artistic rendering of the new super resolution microscopy technology. Animal cells (red) are mounted on a slide coated with the multilayer hyperbolic metamaterial. Nanoscale structured light (blue) is generated by the metamaterial and then illuminates the animal cells.
Illustration: Yeon Ui Lee – University of California San Diego

To create what’s known as a “super-resolution microscope,” the engineers didn’t actually upgrade the microscope at all. Instead, they developed a hyperbolic metamaterial—a material with a unique structure that manipulates light, originally developed to improve optical imaging—that’s applied to a microscope slide, onto which the sample is placed. This particular hyperbolic metamaterial is made from “nanometers-thin alternating layers of silver and silica glass,” which have the effect of shortening and scattering the wavelengths of visible light that pass through it, resulting in a series of random speckled patterns.

Those speckled light patterns end up illuminating the sample sitting on the microscope slide from different angles, allowing a series of low-resolution images to be captured, each highlighting a different part. Those images are then fed into a reconstruction algorithm which intelligently combines them and spits out a high-resolution image.
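The recover-detail-from-many-patterned-exposures idea can be sketched numerically. To be clear, this toy is not the paper's reconstruction algorithm: it uses invented random patterns on a 16×16 grid, a block-average as a stand-in for the diffraction-limited camera, and a crude weighted redistribution instead of a proper inverse solver. But it shows how many pattern-weighted low-resolution frames can together pin features to fine-grid positions that no single frame resolves:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "scene": two point emitters that a 4x-coarser camera lumps together.
fine = np.zeros((16, 16))
fine[4, 4] = 1.0
fine[4, 6] = 1.0

def downsample(img, f=4):
    """Block-average to a coarse grid, mimicking the diffraction limit."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(img, f=4):
    return np.kron(img, np.ones((f, f)))

# Many exposures, each lit by a different random "speckle" pattern.
acc = np.zeros_like(fine)
norm = np.zeros_like(fine)
for _ in range(2000):
    pattern = rng.random(fine.shape)
    frame = downsample(fine * pattern)      # coarse, pattern-weighted image
    acc += upsample(frame) * pattern        # redistribute onto the fine grid
    norm += pattern ** 2

recon = acc / norm
peaks = sorted(tuple(map(int, np.unravel_index(i, recon.shape)))
               for i in np.argsort(recon.ravel())[-2:])
print(peaks)                                # the two emitters, resolved
```

The two brightest pixels of the reconstruction land back on the emitters' fine-grid positions, even though both emitters fall inside a single coarse camera pixel.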

Comparison of images taken by a light microscope without the hyperbolic metamaterial (left) and with the hyperbolic metamaterial (right): quantum dots.
Image: University of California San Diego

It’s not unlike the sensor-shift approach used in some digital cameras to produce super-resolution photos, where the image sensor is moved ever so slightly in various directions while multiple images are captured and then combined to merge all of the extra details captured. This technology—detailed in a paper recently published in the journal Nature Communications—can boost a conventional light microscope’s resolution to 40 nanometers, while still allowing living organisms to be observed. It still can’t compete with what electron microscopes are capable of, but it’s no less remarkable given how easily it can improve the capabilities of more affordable and safer hardware already in use in labs all around the world.

Source: Simple Slide Coating Gives a Boost to the Resolution of a Microscope

Dark Energy Survey releases most precise look at the universe’s evolution

New results from the Dark Energy Survey use the largest ever sample of galaxies over an enormous piece of the sky to produce the most precise measurements of the universe’s composition and growth to date. Scientists measured that the way matter is distributed throughout the universe is consistent with predictions in the standard cosmological model, the best current model of the universe.

Over the course of six years, DES surveyed 5,000 square degrees — almost one-eighth of the entire sky — in 758 nights of observation, cataloguing hundreds of millions of objects. The results announced today draw on data from the first three years — 226 million galaxies observed over 345 nights — to create the largest and most precise maps yet of the distribution of galaxies in the universe at relatively recent epochs.

[…]

Ordinary matter makes up only about 5% of the universe. Dark energy, which cosmologists hypothesize drives the accelerating expansion of the universe by counteracting the force of gravity, accounts for about 70%. The last 25% is dark matter, whose gravitational influence binds galaxies together. Both dark matter and dark energy remain invisible and mysterious, but DES seeks to illuminate their natures by studying how the competition between them shapes the large-scale structure of the universe over cosmic time.

DES photographed the night sky using the 570-megapixel Dark Energy Camera on the Victor M. Blanco 4-meter Telescope at the Cerro Tololo Inter-American Observatory in Chile, a program of the National Science Foundation’s NOIRLab.

[…]

 

The Dark Energy Survey is a collaboration of more than 400 scientists from 25 institutions in seven countries. For more information about the survey, please visit the experiment’s website.

[…]

Second, DES detected the signature of dark matter through weak gravitational lensing. As light from a distant galaxy travels through space, the gravity of both ordinary and dark matter can bend it, resulting in a distorted image of the galaxy as seen from Earth. By studying how the apparent shapes of distant galaxies are aligned with each other and with the positions of nearby galaxies along the line of sight, DES scientists inferred the spatial distribution (or clumpiness) of the dark matter in the universe.

[…]

The recent DES results will be presented in a scientific seminar on May 27. Twenty-nine papers are available on the arXiv online repository.


The Dark Energy Survey photographed the night sky using the 570-megapixel Dark Energy Camera on the Victor M. Blanco 4-meter Telescope at the Cerro Tololo Inter-American Observatory in Chile, a program of the National Science Foundation’s NOIRLab. Photo: Reidar Hahn, Fermilab

Source: Dark Energy Survey releases most precise look at the universe’s evolution

Mice Develop Halfway to Gestation Inside an Artificial Womb

Although people-growing is probably a long way off, mice can now mostly develop inside an artificial uterus (try private window if you hit a paywall) thanks to a breakthrough in developmental biology. So far, the mice can only be kept alive halfway through gestation. There’s a point at which the nutrient formula provided to them isn’t enough, and they need a blood supply to continue growing. That’s the next goal. For now, let’s talk about that mechanical womb setup.

Carousel of Care

The mechanical womb was developed to better understand how various factors such as gene mutations, nutrients, and environmental conditions affect murine fetuses in development. Why do miscarriages occur, and why do fertilized eggs fail to implant in the first place? How exactly does an egg explode into 40 trillion cells when things do work out? This see-through uterus ought to reveal a few more of nature’s gestational secrets.

 

Dr. Jacob Hanna of Israel’s Weizmann Institute spent seven years building the two-part system of incubators, nutrients, and ventilation. Each mouse embryo floats in a glass jar, suspended in a concoction of liquid nutrients. A carousel of jars slowly spins around night and day to keep the embryos from attaching to the sides of the jars and dying. Along with the nutrient fluid, the mice receive a carefully-controlled mixture of oxygen and carbon dioxide from the ventilation machine. Dr. Hanna and his team have grown over 1,000 embryos this way.

Full gestation in mice takes about 20 days. As outlined in the paper published in Nature, Dr. Hanna and team removed mouse embryos at day five of gestation and were able to grow them for six more days in the artificial wombs. When compared with uterus-grown mice on day eleven, their sizes and weights were identical. According to an interview after the paper was published, the team has already gone even further, removing embryos right after fertilization on day zero and growing them for eleven days inside the mechanical womb. The next step is figuring out how to provide an artificial blood supply, or a more advanced system of nutrients, that will let the embryos grow until they become mice.

Embryonic Ethics

Here’s the most interesting part: the team doesn’t necessarily have to disrupt live gestation to get their embryos. New techniques allow embryos to be created from murine connective tissue cells called fibroblasts without needing fertilized eggs. Between this development and Dr. Hanna’s carousel of care, there would no longer be a need to fertilize eggs merely to destroy them later.

It’s easy to say that any and all animal testing is unethical because we can’t exactly get their consent (not that we would necessarily ask for it). At the same time, it’s true that we learn a lot from testing on animals first. Our lust for improved survival is at odds with our general empathy, and survival tends to win out on a long enough timeline. A bunch of people die every year waiting for organ transplants, and scientists are already growing pigs for that express purpose. And unlocking more mysteries of the gestation process may make surrogate pregnancies possible for more animals in the frozen zoo.

In slightly more unnerving news, some scientists have recently created embryos that are part human and part monkey for the same reason. Maybe this is how we get to Planet of the Apes.

Source: Mice Develop Inside An Artificial Womb | Hackaday

Three ways to improve scholarly writing to get more citations

Researchers from the University of Arizona and the University of Utah published a new paper in the Journal of Marketing that examines why most scholarly research is misinterpreted by the public or never escapes the ivory tower, and suggests that such research gets lost in abstract, technical, and passive prose.

The study, forthcoming in the Journal of Marketing, is titled “Marketing Ideas: How to Write Research Articles that Readers Understand and Cite” and is authored by Nooshin L. Warren, Matthew Farmer, Tiany Gu, and Caleb Warren.

From developing vaccines to nudging people to eat less, scholars conduct research that could change the world, but most of their ideas either are misinterpreted by the public or never escape the ivory tower.

Why does most academic research fail to make an impact? The reason is that many ideas get lost in an attic of abstract, technical, and passive prose. Instead of describing “spilled coffee” and “one-star Yelp reviews,” scholars discuss “expectation-disconfirmation” and “post-purchase behavior.” Instead of writing “policies that let firms do what they want have increased the gap between the rich and the poor,” scholars write sentences like, “The rationalization of free-market capitalism has been resultant in the exacerbation of inequality.” Instead of stating, “We studied how liberal and conservative consumers respond when brands post polarizing messages on social media,” they write, “The interactive effects of ideological orientation and corporate sociopolitical activism on owned media engagement were studied.”

Why is writing like this unclear? Because it is too abstract, technical, and passive. Scholars need abstraction to describe theory. Thus, they write about “sociopolitical activism” rather than Starbucks posting a “Black Lives Matter” meme on Facebook. They are familiar with technical terms, such as “ideological orientation,” and they rely on them rather than using more colloquial terms such as “liberal or conservative.” Scholars also want to sound objective, which lulls them into the passive voice (e.g., the effects… were studied) rather than active writing (e.g., “we studied the effects…”). Scholars need to use some abstract, technical, and passive writing. The problem is that they tend to overuse these practices without realizing it.

When writing is abstract, technical, and passive, readers struggle to understand it. In one of the researchers’ experiments, they asked 255 marketing professors to read the first page of research papers published in the Journal of Marketing (JM), Journal of Marketing Research (JMR), and Journal of Consumer Research (JCR). The professors understood less of the papers that used more abstract, technical, and passive writing compared to those that relied on concrete, non-technical, and active writing.

As Warren explains, “When readers do not understand an article, they are unlikely to read it, much less absorb it and be influenced by its ideas. We saw this when we analyzed the text of 1,640 articles published in JM, JMR, and JCR between 2000 and 2010. We discovered that articles that relied more on abstract, technical, and passive writing accumulated fewer citations on both Google Scholar and the Web of Science.” An otherwise average JM article that scored one standard deviation lower (clearer) on the authors’ measures of abstract, technical, and passive writing accumulated approximately 157 more Google Scholar citations as of May 2020 than a JM article with average writing.
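A measure like "how passive is this article" has to be operationalized before it can be correlated with citation counts. As a deliberately crude illustration (the study's actual text measures are more sophisticated than this), a regex that flags a form of "to be" followed by a past participle already separates the article's own example sentences:

```python
import re

# Crude passive-voice screen: a form of "to be" followed by a word
# ending in -ed or -en. Real linguistic tooling is more careful.
PASSIVE = re.compile(
    r"\b(?:is|are|was|were|been|being|be)\s+\w+(?:ed|en)\b",
    re.IGNORECASE,
)

def passive_hits(text: str) -> int:
    """Count likely passive constructions in a passage."""
    return len(PASSIVE.findall(text))

passive = "The interactive effects of ideological orientation were studied."
active = "We studied the interactive effects of ideological orientation."
print(passive_hits(passive), passive_hits(active))   # -> 1 0
```

Scores like this, computed over whole articles, are the kind of per-paper covariate one can then regress citation counts against.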

Why do scholars write unclearly? There is an unlikely culprit: knowledge. Conducting good research requires authors to know a lot about their work. It takes years to create research that meaningfully advances scientific knowledge. Consequently, academic articles are written by authors who are intimately familiar with their topics, methods, and results. Authors, however, often forget or simply do not realize that potential readers (e.g., Ph.D. students, scholars in other sub-disciplines, practicing professionals, etc.) are less familiar with the intricacies of the research, a phenomenon called the curse of knowledge.

The research team explores whether the curse of knowledge might be enabling unclear writing by asking Ph.D. students to write about two research projects. The students wrote about one project on which they were the lead researcher and another project led by one of their colleagues. The students reported that they were more familiar with their own research than their colleague’s research. They also thought that they wrote more clearly about their own research, but they were mistaken. In fact, the students used more abstraction, technical language, and passive voice when they wrote about their own research than when they wrote about their colleague’s research.

“To make a greater impact, scholars need to overcome the curse of knowledge so they can package their ideas with concrete, non-technical, and active writing. Clear writing gives ideas the wings needed to escape the attics, towers, and increasingly narrow halls of their academic niches so that they can reduce infection, curb obesity, or otherwise make the world a better place,” says Farmer.

Source: Three ways to improve scholarly writing to get more citations

Florida Keys Mosquito Control District and Oxitec Announce Site Participation for Florida Keys Pilot Project to Combat Disease Transmitting Mosquito Type

The Florida Keys Mosquito Control District and Oxitec Ltd today announced location participation plans for its landmark Florida Keys pilot project. Project managers anticipate that during the last week of April and first week of May, release boxes, non-release boxes and netted quality control boxes will be placed in six locations: two on Cudjoe Key, one on Ramrod Key and three on Vaca Key. Across all release locations, fewer than 12,000 mosquitoes are expected to emerge each week for approximately 12 weeks. Untreated comparison sites will be monitored with mosquito traps on Key Colony Beach, Little Torch Key, and Summerland Key.

This marks the start of the US EPA approved project to evaluate this safe, sustainable and environmentally-friendly solution to control the invasive Aedes aegypti mosquito species.

Oxitec’s non-biting male mosquitoes will emerge from the boxes to mate with the local biting female mosquitoes. The female offspring of these encounters cannot survive, and the population of Aedes aegypti is subsequently controlled.
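The suppression logic in the paragraph above can be sketched as a toy generational model. All numbers here are invented for illustration; real release planning accounts for mating competitiveness, dispersal, immigration, and overlapping generations:

```python
def next_generation(wild_females, wild_males, released_males,
                    offspring_per_female=2.0):
    """One generation: matings with released males leave no surviving females."""
    wild_share = wild_males / (wild_males + released_males)   # odds of a wild mate
    daughters = wild_females * wild_share * offspring_per_female / 2
    sons = wild_females * wild_share * offspring_per_female / 2
    return daughters, sons

females, males = 1000.0, 1000.0        # invented starting wild population
for _ in range(6):                     # a few generations of sustained releases
    females, males = next_generation(females, males, released_males=12_000)

print(round(females, 4))               # the wild population collapses toward zero
```

The point of the sketch is the feedback: each generation of matings with modified males removes future mothers, so sustained releases drive the wild population down multiplicatively rather than linearly.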

The Aedes aegypti mosquito makes up about four percent of the mosquito population in the Keys but is responsible for virtually all mosquito-borne diseases transmitted to humans. This species of mosquito transmits dengue, Zika, yellow fever and other human diseases, and can transmit heartworm and other potentially deadly diseases to pets and animals.

Source: Florida Keys Mosquito Control District and Oxitec Announce Site Participation for Florida Keys Pilot Project to Combat Disease Transmitting Mosquito — Oxitec

There’s a lot of fear-mongering on this one, based on some outright lies and stale facts—e.g., citing an old Nature article that has since been retracted, massively inflating the number of mosquitoes to be released, saying people aren’t told where the mosquitoes will be released (they are told; just read above), etc. I’m sure some of their fears may be legitimate, but throwing in all of this bullshit really weakens their case and makes me too bored to find the hidden gem in the codswallop after I keep fact-checking and finding that the fearmongers are lying yet again.

Satellites show world’s glaciers melting much faster than ever

Glaciers are melting faster, losing 31 percent more snow and ice per year than they did 15 years earlier, according to three-dimensional satellite measurements of all the world’s mountain glaciers.

[…]

Using 20 years of recently declassified satellite data, scientists calculated that the world’s 220,000 mountain glaciers have been losing more than 328 billion tons (298 billion metric tons) of ice and snow per year since 2015, according to a study published Wednesday in the journal Nature. That’s enough melt flowing into the world’s rising oceans to put Switzerland under almost 24 feet (7.2 meters) of water each year.
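The Switzerland comparison holds up to back-of-the-envelope arithmetic (Switzerland's land area of roughly 41,285 km² is my assumption, not stated in the article):

```python
# Sanity check of "Switzerland under ~24 feet of water each year".
# Assumptions not from the article: Switzerland's area is ~41,285 km²,
# and one metric ton of meltwater occupies ~1 cubic meter.
melt_per_year_t = 298e9                   # metric tons of ice/snow per year
water_volume_m3 = melt_per_year_t * 1.0   # ~1 m³ per metric ton of water
switzerland_area_m2 = 41_285 * 1e6        # km² converted to m²

depth_m = water_volume_m3 / switzerland_area_m2
depth_ft = depth_m * 3.28084
print(f"{depth_m:.1f} m ≈ {depth_ft:.1f} ft")  # prints 7.2 m ≈ 23.7 ft
```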

The annual melt rate from 2015 to 2019 was 78 billion tons (71 billion metric tons) a year higher than it was from 2000 to 2004. Global thinning rates, distinct from the volume of water lost, doubled in the last 20 years.

[…]

Almost all the world’s glaciers are melting, even ones in Tibet that used to be stable, the study found. Except for a few glaciers in Iceland and Scandinavia that are fed by increased precipitation, melt rates are accelerating around the world.

[…]

Source: Satellites show world’s glaciers melting faster than ever

NASA Generates Oxygen on Mars, Setting Stage for Crewed Missions

[…]

On April 20, the MOXIE device on Perseverance produced roughly 5 grams of oxygen. That’s a tiny step for NASA and its rover, but a potentially huge leap for humanity and our aspirations on Mars. This small amount of oxygen—extracted from the carbon dioxide-rich Martian atmosphere—is only enough to sustain an astronaut for about five minutes, but it’s the principle of the experiment that matters. This technology demonstration shows that it’s possible to produce oxygen on Mars, a prerequisite for sustainably working on and departing the Red Planet.

[…]

“Someday we hope to send people to Mars, but they will have to take an awful lot of stuff with them,” Michael Hecht, the principal investigator of the MOXIE project, explained in an email. “The single biggest thing will be a huge tank of oxygen, about 25 tonnes of it.”

Yikes—that converts to approximately 55,100 pounds, or 25,000 kg.

Some of this oxygen will be for the astronauts to breathe, but the “bulk of it” will be used for the rocket “to take the crew off the planet and start them on their journey home again,” Hecht said.

Hence the importance of the MOXIE experiment. Should we be capable of making that oxygen on Mars, it would “save a lot of money, time, and complexity,” said Hecht, but it’s a “challenging new technology that we can only really test properly if we actually do it on Mars,” and that’s “what MOXIE is for, even though it’s a very small scale model.”

[…]

MOXIE works by separating oxygen from carbon dioxide, leaving carbon monoxide as the waste product.

“MOXIE uses electrical energy to take carbon dioxide molecules, CO2, and separate them into two other types of molecule, carbon monoxide (CO) and oxygen (O2),” Hecht explained. “It uses a technology called electrolysis that is very similar to a fuel cell, except that a fuel cell goes the other way—it starts with fuel and oxygen and combines them to get electrical energy out.”
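Hecht's description implies the reaction 2 CO2 → 2 CO + O2, so the amount of Martian atmosphere a full-scale system would need to process for that 25-tonne oxygen tank can be sketched with standard molar masses (the arithmetic below is my own, not from the article):

```python
# Mass balance for MOXIE-style electrolysis: 2 CO2 -> 2 CO + O2.
# Molar masses in g/mol are standard values, not from the article.
M_CO2, M_CO, M_O2 = 44.01, 28.01, 32.00

target_o2_g = 25e6                  # the ~25 tonnes Hecht mentions
mol_o2 = target_o2_g / M_O2         # moles of O2 required
mol_co2 = 2 * mol_o2                # stoichiometry: 2 CO2 per O2
co2_needed_t = mol_co2 * M_CO2 / 1e6
co_waste_t = mol_co2 * M_CO / 1e6   # CO vented as the waste product

print(f"CO2 to process: ~{co2_needed_t:.0f} tonnes")  # ~69 tonnes
print(f"CO waste:       ~{co_waste_t:.0f} tonnes")    # ~44 tonnes
```

Mass is conserved: the ~69 tonnes of CO2 split into 25 tonnes of oxygen plus ~44 tonnes of carbon monoxide waste, which puts the April 20 output of 5 grams in perspective.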

[…]

When asked what surprised him most about the first test, Hecht said it was the identical performance compared to tests done on Earth.

[…]


Source: NASA Generates Oxygen on Mars, Setting Stage for Crewed Missions

Google Earth Now Shows Decades of Climate Change in Seconds

Google Earth has partnered with NASA, the U.S. Geological Survey, the EU’s Copernicus Climate Change Service, and Carnegie Mellon University’s CREATE Lab to bring users time-lapse images of the planet’s surface—24 million satellite photos taken over 37 years. Together they offer photographic evidence of a planet changing faster than at any time in millennia. Shorelines creep in. Cities blossom. Trees fall. Water reservoirs shrink. Glaciers melt and fracture.

“We can objectively see global warming with our own eyes,” said Rebecca Moore, director of Google Earth. “We hope that this can ground everyone in an objective, common understanding of what’s actually happening on the planet, and inspire action.”

Timelapse, the name of the new Google Earth feature, is the largest video on the planet, according to a statement from the company, requiring 2 million processing hours in cloud computers and the equivalent of 530,000 high-resolution videos. The tool stitches together nearly 50 years of imagery from the U.S.’s Landsat program, which is run by NASA and the USGS. When combined with images from complementary European Sentinel-2 satellites, Landsat provides the equivalent of complete coverage of the Earth’s surface every two days. Google Earth is expected to update Timelapse about once a year.

The Timelapse images are stark. In Southwestern Greenland, warmer Atlantic waters and air temperatures are accelerating ice melt.

Claushavn, Greenland
Source: Google

Tree loss in Brazil in 2020 surged by a quarter over the prior year.

Mamoré River, Brazil
Source: Google

Solar farms are rising in China.

Longyangxia Solar Park, located in Gonghe County, Qinghai Province.
Source: Google

This image, below, illustrates what it took to make a viewable experience. The 24 million images had to be processed to remove clouds or other obstructions and then stitched together into the final product.

Twenty-four million satellite images from 1984 to 2020 were analyzed to identify and remove artifacts, like clouds.
Source: Google
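A common way to remove clouds from a stack of satellite images of the same place, whether or not it is what Google's pipeline actually does, is a per-pixel temporal median composite: clouds are transient outliers in time, so the median of many revisits recovers the stable surface. A minimal sketch with synthetic data:

```python
import numpy as np

# Per-pixel temporal median compositing, a standard remote-sensing
# technique for cloud removal; illustrative sketch, not Google's code.
# stack: (time, height, width) grayscale frames of one location.
ground = np.full((4, 4), 0.3)           # "true" surface reflectance
stack = np.repeat(ground[None], 7, axis=0)
stack[2, 1, 1] = 0.95                   # a bright "cloud" in one frame

composite = np.median(stack, axis=0)    # outlier frames get voted out
print(composite[1, 1])                  # prints 0.3: the cloud is gone
```

With 37 years of Landsat and Sentinel-2 revisits per pixel, this kind of temporal statistic is robust even when a substantial fraction of frames are obscured.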

“Now, our one, static snapshot of the planet”—Google Earth—“has become dynamic, providing ongoing visual evidence of Earth’s changes from climate and human behavior occurring across space and time, over four decades,” Moore said. “And this was made possible because of the U.S. government and European Union’s commitments to open and accessible data.”

Source: Google Earth Now Shows Decades of Climate Change in Seconds – Bloomberg

New Treatment Makes Teeth Grow Back

A new experimental treatment could someday give people a way to grow missing teeth, if early research on lab animals holds up.

Scientists at Japan’s Kyoto University and the University of Fukui developed a monoclonal antibody treatment that seems to trigger the body to grow new teeth, according to research published last month in the journal Science Advances. If upcoming experiments continue to work, it could eventually give us a way to regrow teeth lost in adulthood or those that were missing since childhood due to congenital conditions.

[…]

Eventually the team found that blocking a gene called USAG-1 led to increased activity of Bone Morphogenetic Protein (BMP), a molecule that determines how many teeth will grow in the first place, and allowed adult mice to regrow any teeth they were missing.

The experiment also worked on ferrets, which the researchers say is important because their teeth are far more humanlike than mouse teeth are.

“Ferrets are diphyodont animals with similar dental patterns to humans,” Kyoto researcher and lead study author Katsu Takahashi said in the press release. “Our next plan is to test the antibodies on other animals such as pigs and dogs.”

There’s still a long way to go before they reach human trials, but continued success in those upcoming trials would be a promising sign for the future of a clinical treatment that lets us naturally regrow our missing teeth.

Source: New Treatment Makes Teeth Grow Back