Scientists discover microbes in the Alps and Arctic that can digest plastic at low temperatures

Finding, cultivating, and bioengineering organisms that can digest plastic not only aids in the removal of pollution, but is now also big business. Several microorganisms that can do this have already been found, but the enzymes that make this possible typically only work at temperatures above 30°C when applied at an industrial scale.

The heating required means that industrial applications remain costly to date and aren't carbon-neutral. But there is a possible solution to this problem: finding specialist cold-adapted microbes whose enzymes work at lower temperatures.

Scientists from the Swiss Federal Institute WSL knew where to look for such microorganisms: at high altitudes in their country's Alps, and in the polar regions. Their findings are published in Frontiers in Microbiology.

“Here we show that novel microbial taxa obtained from the ‘plastisphere’ of alpine and arctic soils were able to break down biodegradable plastics at 15°C,” said first author Dr. Joel Rüthi, currently a guest scientist at WSL. “These organisms could help to reduce the costs and environmental burden of an enzymatic recycling process for plastic.”

[…]

None of the strains were able to digest PE (polyethylene), even after 126 days of incubation on these plastics. But 19 strains (56%), including 11 fungi and 8 bacteria, were able to digest PUR (polyurethane) at 15°C, while 14 fungi and 3 bacteria were able to digest the plastic mixtures of PBAT (polybutylene adipate terephthalate) and PLA (polylactic acid). Nuclear magnetic resonance (NMR) and a fluorescence-based assay confirmed that these strains were able to chop up the PBAT and PLA polymers into smaller molecules.

[…]

The best performers were two uncharacterized fungal species in the genera Neodevriesia and Lachnellula: these were able to digest all of the tested plastics except PE.

[…]

Source: Scientists discover microbes in the Alps and Arctic that can digest plastic at low temperatures

OpenAI attempts to use language models to explain neurons in language models, and open-sources the results

[…]

One simple approach to interpretability research is to first understand what the individual components (neurons and attention heads) are doing. This has traditionally required humans to manually inspect neurons to figure out what features of the data they represent. This process doesn’t scale well: it’s hard to apply it to neural networks with tens or hundreds of billions of parameters. We propose an automated process that uses GPT-4 to produce and score natural language explanations of neuron behavior and apply it to neurons in another language model.

This work is part of the third pillar of our approach to alignment research: we want to automate the alignment research work itself. A promising aspect of this approach is that it scales with the pace of AI development. As future models become increasingly intelligent and helpful as assistants, we will find better explanations.

How it works

Our methodology consists of running 3 steps on every neuron.

[…]

Step 1: Generate explanation using GPT-4

Given a GPT-2 neuron, generate an explanation of its behavior by showing relevant text sequences and activations to GPT-4.
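
A minimal sketch of what this step could look like, assuming the `openai` Python client (>= 1.0) with an API key in the environment; the prompt wording, model name, and the example token/activation data are illustrative, not OpenAI's exact ones:

```python
# Sketch of Step 1: ask GPT-4 to explain a neuron from (token, activation)
# pairs. The example data and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical top-activating sequences for one GPT-2 neuron: each token is
# paired with the neuron's activation after reading that token.
TOP_SEQUENCES = [
    [("The", 0.0), ("movie", 0.1), ("was", 0.0), ("fantastic", 9.2), ("!", 1.3)],
    [("What", 0.0), ("a", 0.2), ("wonderful", 8.7), ("day", 0.4)],
]

def format_sequences(sequences):
    """Render each sequence as token<TAB>activation lines."""
    return "\n\n".join(
        "\n".join(f"{tok}\t{act:.1f}" for tok, act in seq) for seq in sequences
    )

def generate_explanation(sequences, model="gpt-4"):
    prompt = (
        "We are studying a neuron in a language model. Below are text "
        "excerpts annotated with the neuron's activation after each token "
        "(higher means stronger).\n\n"
        f"{format_sequences(sequences)}\n\n"
        "In one short phrase, what does this neuron respond to?"
    )
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

print(generate_explanation(TOP_SEQUENCES))
# e.g. "words expressing enthusiasm or positive sentiment"
```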

[…]

Step 2: Simulate using GPT-4

Simulate what a neuron that fired for the explanation would do, again using GPT-4.
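
Continuing the sketch above, with the same hedges; the 0–10 scale and prompt wording are illustrative, and the released simulator works more carefully (e.g. by reading the model's probabilities over the score tokens rather than parsing sampled text):

```python
# Sketch of Step 2: given only the explanation from Step 1, ask GPT-4 to
# guess ("simulate") how strongly the neuron fires after each token.
from openai import OpenAI

client = OpenAI()

def simulate_activations(explanation, tokens, model="gpt-4"):
    prompt = (
        f"A neuron in a language model responds to: {explanation}.\n"
        "For each token below, output one line in the form token<TAB>score, "
        "where score is an integer from 0 to 10 for how strongly the neuron "
        "should fire after that token.\n\n" + "\n".join(tokens)
    )
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    scores = []
    for line in reply.splitlines():
        _, sep, raw = line.rpartition("\t")
        try:
            scores.append(float(raw) if sep else 0.0)
        except ValueError:
            scores.append(0.0)  # treat malformed lines as no activation
    return scores

print(simulate_activations(
    "words expressing enthusiasm or positive sentiment",
    ["The", "movie", "was", "fantastic", "!"],
))
```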

[…]

Step 3: Compare

Score the explanation based on how well the simulated activations match the real activations.
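
A minimal sketch of this comparison, scoring with a plain Pearson correlation between the two activation series (the released scoring code is also correlation-based, though more elaborate):

```python
# Sketch of Step 3: score the explanation by how well the simulated
# activations from Step 2 track the neuron's real activations
# (1.0 = perfect match, 0.0 = no relationship).
import numpy as np

def score_explanation(real_activations, simulated_activations):
    real = np.asarray(real_activations, dtype=float)
    sim = np.asarray(simulated_activations, dtype=float)
    if real.std() == 0.0 or sim.std() == 0.0:
        return 0.0  # a constant series carries nothing to correlate with
    return float(np.corrcoef(real, sim)[0, 1])

real = [0.0, 0.1, 0.0, 9.2, 1.3]  # measured in GPT-2
sim = [0.0, 1.0, 0.0, 9.0, 2.0]   # simulated from the explanation
print(round(score_explanation(real, sim), 3))  # ≈ 0.994: a well-explained neuron
```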

[…]

What we found

Using our scoring methodology, we can start to measure how well our techniques work for different parts of the network and try to improve the technique for parts that are currently poorly explained. For example, our technique works poorly for larger models, possibly because later layers are harder to explain.

[Figure: Scores by size of the model being interpreted. x-axis: parameters in the model being interpreted (1e+5 to 1e+9, log scale); y-axis: explanation score (0.02 to 0.12).]

Although the vast majority of our explanations score poorly, we believe we can now use ML techniques to further improve our ability to produce explanations. For example, we found we were able to improve scores by:

  • Iterating on explanations. We can increase scores by asking GPT-4 to come up with possible counterexamples, then revising explanations in light of their activations.
  • Using larger models to give explanations. The average score goes up as the explainer model’s capabilities increase. However, even GPT-4 gives worse explanations than humans, suggesting room for improvement.
  • Changing the architecture of the explained model. Training models with different activation functions improved explanation scores.

We are open-sourcing our datasets and visualization tools for GPT-4-written explanations of all 307,200 neurons in GPT-2, as well as code for explanation and scoring using publicly available models on the OpenAI API. We hope the research community will develop new techniques for generating higher-scoring explanations and better tools for exploring GPT-2 using explanations.
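
The dataset is published as static files, one record per neuron. A sketch of pulling a single record follows; the host and path pattern are an assumption based on how the data appeared to be hosted at release, so check the open-sourced repository for the authoritative layout:

```python
# Sketch: download the released GPT-4 explanation for a single GPT-2 neuron.
# GPT-2's 307,200 MLP neurons = 48 layers x 6,400 neurons per layer.
# ASSUMPTION: the URL pattern below is illustrative of the release layout;
# consult the open-sourced repository for the authoritative location.
import json
import urllib.request

LAYER, NEURON = 0, 0

url = (
    "https://openaipublic.blob.core.windows.net/neuron-explainer/"
    f"data/explanations/{LAYER}/{NEURON}.jsonl"
)
with urllib.request.urlopen(url) as resp:
    record = json.loads(resp.read().decode("utf-8").splitlines()[0])

# The record schema may vary; print the top-level fields rather than
# assuming particular names.
print(sorted(record))
```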

We found over 1,000 neurons with explanations that scored at least 0.8, meaning that according to GPT-4 they account for most of the neuron’s top-activating behavior. Most of these well-explained neurons are not very interesting. However, we also found many interesting neurons that GPT-4 didn’t understand. We hope as explanations improve we may be able to rapidly uncover interesting qualitative understanding of model computations.

Source: Language models can explain neurons in language models

Coqui.ai Text to Speech library – create your own voice

🐸TTS is a library for advanced Text-to-Speech generation. It's built on the latest research and was designed to achieve the best trade-off among ease of training, speed, and quality. 🐸TTS comes with pretrained models and tools for measuring dataset quality, and is already used in 20+ languages for products and research projects.

GitHub page: https://github.com/coqui-ai/TTS
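
For a feel of the API, a minimal usage sketch based on the project's README; pretrained model names change between releases, so list the available models first:

```python
# Minimal 🐸TTS usage sketch based on the project's README.
from TTS.api import TTS

# Browse the available pretrained models.
print(TTS.list_models()[:5])

# Load a pretrained English model (downloaded on first use)...
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC",
          progress_bar=False, gpu=False)

# ...and synthesize speech straight to a WAV file.
tts.tts_to_file(text="Hello from Coqui TTS!", file_path="output.wav")
```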