Australian airline Qantas issued standing orders to its pilots last week advising them that some of its fleet experienced interference on VHF frequencies from sources purporting to be the Chinese military.
The Register has confirmed the reports.
The interference has been noticed in the western Pacific and South China Sea. Qantas has advised its crew to continue their assigned path and report interference to the controlling air traffic control authority.
The airline also has stated there have been no reported safety events.
Qantas operations order
Qantas’ warning follows a similar one from the International Federation of Air Line Pilots’ Associations (IFALPA) issued on March 2nd.
IFALPA said it had “been made aware of some airlines and military aircraft being called over 121.50 or 123.45 by military warships in the Pacific region, notably South China Sea, Philippine Sea, East of Indian Ocean.” According to the organization, some flights contacted by the warships were provided vectors to avoid the airspace.
But while interference with VHF can be disruptive, more concerning is that IFALPA said it has “reason to believe there may be interferences to GNSS and RADALT as well.”
RADALT is aviation shorthand for radar altimeter – an instrument that tells pilots how far they are above the ground, so they can avoid hitting it. GNSS stands for Global Navigation Satellite System.
Jamming navigation systems or radar altimeters can greatly disorient a pilot – or worse.
Of course, there is no telling if China is merely testing out its capabilities, performing these actions as a show of power, or has a deeper motive.
IFALPA recommended pilots who experience interference do not respond to warships, notify dispatchers and relevant air traffic control, and complete necessary reports.
China has asserted more control over Asia Pacific waters. Outgoing Micronesian president David Panuelo recently accused Beijing of sending warnings to stay away from its ships when they entered his country’s territory. In an explosive letter, Panuelo said China also attempted to take control of the nation’s submarine cables and telecoms infrastructure.
RGB on your PC is cool – it’s beautiful and can be quite nuts – but it’s also quite complex, and getting it to do what you want isn’t always easy. This article is the result of many, many reboots and much Googling.
I set up a PC with 2×3 Lian Li Uni Fan SL120 (top and side), 2 Lian Li Strimer cables (an ATX and a PCIe), an NZXT Kraken Z73 CPU cooler (with LED screen, but cooled by the Lian Li Uni Fan SL120 on the side, not the NZXT fans that came with it), 2 RGB DDR5 DRAM modules, an ASUS ROG GeForce RTX 2070 Super, an ASUS ROG Strix G690-F Gaming WiFi motherboard and a Corsair K95 RGB keyboard.
Happy rainbow colours! It seems to default to this every time I change stuff
It’s no mean feat doing all the wiring on the fan controllers nowadays, and the instructions don’t make it much easier. Here is the wiring setup for this build (excluding the keyboard):
The problem is that all of this hardware comes with its own bloated, janky software that you need in order to get it to do anything.
ASUS: Armory Crate / ASUS AURA
This thing takes up loads of memory and breaks often.
I decided to get rid of it once it had problems updating my drivers. You can still download Aura separately (although there is a warning that it will no longer be updated). To uninstall Armoury Crate you can’t just remove everything from Add or Remove Programs; you need the dedicated uninstall tool, which also gets rid of the scheduled tasks and a directory that the Windows uninstallers leave behind.
Once you install Aura separately, it still spawns an insane number of processes, but you don’t actually need to run Aura to change the RGBs on the VGA and DRAM. Oddly enough, that doesn’t include the motherboard itself.
Just running AURA, not Armory Crate
You also can use other programs. Theoretically. That’s what the rest of this article is about. But in the end, I used Aura.
As you read on, keep in mind that a lot of the other tools may be failing for me because I don’t have Armoury Crate installed. Nothing works if I don’t have Aura installed, so I may as well use that.
Note: if you want to follow your driver updates, there’s a thread on the Republic of Gamers website that follows a whole load of them.
Problem I never solved: getting the Motherboard itself to show under Aura.
Corsair: iCUE
Yup, this takes up memory, works pretty well, and keeps updating for no apparent reason, and quite often I have to slide the switch left and right to get the keyboard detected as a USB device so the lighting works again. In terms of interface it’s quite easy to use.
Woohoo! All these processes for keyboard lighting!
It detects the motherboard and can monitor the motherboard, but can’t control the lighting on it. Once upon a time it did. Maybe this is because I’m not running the whole Armory Crate thing any more.
No idea.
Note: if you enable everything in the dashboard, memory usage goes up to 500 MB.
In fact, just having the iCUE screen open uses ~200 MB of memory.
It’s the most user friendly way of doing keyboard lighting effects though, so I keep it.
OpenRGB
When I first started running it, it told me I needed to run it as an administrator to get a driver working. I ran it, and it hung my computer at device detection. Later on it started rebooting it. After installing the underlying ASUS Aura services, it ran for me. [Note: the following is for the standard 0.8 build: Once. It reboots my PC after device detection now. Lots of people on Reddit have it working; maybe it needs the Armoury Crate software. I have opened an issue, hopefully it will get fixed? According to a Reddit user, this could be because “If you have armoury crate installed, OpenRGB cannot detect your motherboard, if your ram is ddr5 [note: which mine is], you’ll gonna have to wait or download the latest pipeline version”.]
OK, so the Pipeline build does work and even detects my motherboard! Unfortunately it didn’t write the setting to the motherboard at first, so after a reboot it went back to rainbow. After my second attempt the setting seems to have stuck and survived the reboot. However, it still hangs the computer on a reboot (everything turns off except the PC itself), and it can take quite some time to open the interface. It also sometimes does and sometimes doesn’t detect the DRAM modules. Issue opened here.
Even with the interface open, the memory footprint is tiny!
Note that it saves the settings to C:\Users\razor\AppData\Roaming\OpenRGB and you can find the logs there too.
SignalRGB
This looks quite good at first glance – it detected my devices and was able to apply effects to all of them at once. Awesome! Unfortunately it has a huge memory footprint (around 600 MB!) and doesn’t write the settings to the devices, so if you don’t run SignalRGB after a reboot, the hardware won’t show any lighting at all – it will all be turned off.
It comes in a free tier with most of what you need, and a paid subscription tier that costs $4 per month – $48 per year! Considering what this does, and that most of these kinds of one-trick-pony utils charge a one-time fee of around $20, that is incredibly high. On Reddit the developers aggressively argue that they need the money to keep supporting new hardware, and that if you think they’re charging too much, you’re nuts. Also, in order to download the free effects you need an account with them.
So nope, not using this.
JackNet RGBSync
Another open source RGB tool; I got it to detect my keyboard and not much else. Development stopped in 2020, and the UI leaves a lot to be desired.
Gigabyte RGB Fusion
Googling alternatives to Aura, you will run into this one. It’s not compatible with my rig and doesn’t detect anything. Not really surprising, considering all my kit is from Gigabyte’s competitor, ASUS.
L-Connect 2 and 3
For the Lian Li fans and the Strimer cables I use L-Connect 2. It has a setting that should let it take over the motherboard’s lighting, but this has stopped working – maybe I need Armoury Crate. It’s a bit clunky (to change settings you need to select which fans in the array you want to send an effect to, and it always shows 4 arrays of 4 fans, which I don’t actually have), but it writes settings to the devices, so you don’t need it running in the background.
L-Connect 3 runs extremely slowly. It’s not hung, just incredibly slow. I don’t know why, but it could be Armoury Crate related.
NZXT CAM
You need this running in the background, or the LED screen on the Kraken will show the default: CPU temperature only. It takes a very long time to start up. It also requires quite a bit of memory to run, which is pretty bizarre if all you want to do is show a few animated GIFs on your CPU cooler in carousel mode.
Interface up on the screen
Running in the background
So, it’s shit but you really really need it if you want the display on the CPU cooler to work.
Fan Control
Not really RGB, but related: Fan Control for Windows.
G-Helper also works for fan control and GPU switching.
Conclusion
None of the alternatives really works well for me. None of them can control the Lian Li Strimer devices, and most control only a few of my components or have prohibitive licenses for what they do. What’s more, to use the alternatives you still need to install the ASUS motherboard driver, which is exactly what I had been hoping to avoid. OpenRGB shows the most promise but is not quite there yet – though it does work for a lot of people, so hopefully it will work for you too. Good luck, and prepare to reboot… a lot!
[…] “With the help of a quantum annealer, we demonstrated a new way to pattern magnetic states,” said Alejandro Lopez-Bezanilla, a virtual experimentalist in the Theoretical Division at Los Alamos National Laboratory. Lopez-Bezanilla is the corresponding author of a paper about the research in Science Advances.
“We showed that a magnetic quasicrystal lattice can host states that go beyond the zero and one bit states of classical information technology,” Lopez-Bezanilla said. “By applying a magnetic field to a finite set of spins, we can morph the magnetic landscape of a quasicrystal object.”
[…]
Lopez-Bezanilla selected 201 qubits on the D-Wave computer and coupled them to each other to reproduce the shape of a Penrose quasicrystal.
Since Roger Penrose conceived the aperiodic structures named after him in the 1970s, no one had put a spin on each of their nodes to observe their behavior under the action of a magnetic field.
“I connected the qubits so all together they reproduced the geometry of one of his quasicrystals, the so-called P3,” Lopez-Bezanilla said. “To my surprise, I observed that applying specific external magnetic fields on the structure made some qubits exhibit both up and down orientations with the same probability, which leads the P3 quasicrystal to adopt a rich variety of magnetic shapes.”
Manipulating the strength of the interactions between the qubits, and between the qubits and the external field, causes the quasicrystal to settle into different magnetic arrangements, offering the prospect of encoding more than one bit of information in a single object.
Some of these configurations exhibit no precise ordering of the qubits’ orientation.
“This can play in our favor,” Lopez-Bezanilla said, “because they could potentially host a quantum quasiparticle of interest for information science.” A spin quasiparticle is able to carry information immune to external noise.
A quasiparticle is a convenient way to describe the collective behavior of a group of basic elements. Properties such as mass and charge can be ascribed to several spins moving as if they were one.
AI software capable of automatically generating images or text from an input prompt or instruction has made it easier for people to churn out content. Correspondingly, the US Copyright Office (USCO) has received an increasing number of applications to register copyright protections for material, especially artwork, created using such tools.
US law states that intellectual property can be copyrighted only if it was the product of human creativity, and the USCO only acknowledges work authored by humans at present. Machines and generative AI algorithms, therefore, cannot be authors, and their outputs are not copyrightable.
Digital art, poems, and books generated using tools like DALL-E, Stable Diffusion, Midjourney, ChatGPT, or even the newly released GPT-4 will not be protected by copyright if they were created by humans using only a text description or prompt, USCO director Shira Perlmutter warned.
“If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it,” she wrote in a document outlining copyright guidelines.
“For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘traditional elements of authorship’ are determined and executed by the technology – not the human user.
“Instead, these prompts function more like instructions to a commissioned artist – they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output.”
The USCO will consider content created using AI if a human author has crafted something beyond the machine’s direct output. A digital artwork that was formed from a prompt, and then edited further using Photoshop, for example, is more likely to be accepted by the office. The initial image created using AI would not be copyrightable, but the final product produced by the artist might be.
Thus it would appear the USCO is simply saying: yes, if you use an AI-powered application to help create something, you have a reasonable chance at applying for copyright, just as if you used non-AI software. If it’s purely machine-made from a prompt, you need to put some more human effort into it.
In a recent case, officials registered a copyright certificate for a graphic novel containing images created using Midjourney. The overall composition and words were protected by copyright since they were selected and arranged by a human, but the individual images themselves were not.
“In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of ‘mechanical reproduction’ or instead of an author’s ‘own original mental conception, to which [the author] gave visible form’. The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This is necessarily a case-by-case inquiry,” the USCO declared.
Perlmutter urged people applying for copyright protection for any material generated using AI to state clearly how the software was used to create the content, and show which parts of the work were created by humans. If they fail to disclose this information accurately, or try to hide the fact it was generated by AI, USCO will cancel their certificate of registration and their work may not be protected by copyright law.
[…]SCOPE Europe is now accredited by the Dutch Data Protection Authority as the monitoring body of the Data Pro Code. On this occasion, SCOPE Europe celebrates its success in obtaining its second accreditation and looks forward to continuing its work on fostering trust in the digital economy.
When we were approached by NLdigital, the creators of the Data Pro Code, we knew that taking on the monitoring of a national code of conduct would be an exciting endeavor. As the first-ever accredited monitoring body for a transnational GDPR code of conduct, SCOPE Europe has built unique expertise in the field, and is proud to apply it further in the context of another co-regulatory initiative.
The Code puts forward an accessible compliance framework for companies of all sizes, including micro, small and medium enterprises in the Netherlands. With the approval and now the accreditation of its monitoring body, the Data Pro Code will enable data processors to demonstrate GDPR compliance and boost transparency within the digital industry.
CivitAI is an AI image generator that isn’t hosted in the US, allowing for much more freedom of creation. It’s a really amazing system that gives Midjourney and DALL-E a run for their money.
Civitai is a platform that makes it easy for people to share and discover resources for creating AI art. Our users can upload and share custom models that they’ve trained using their own data, or browse and download models created by other users. These models can then be used with AI art software to generate unique works of art.
Cool, what’s a “Model?”
Put simply, a “model” refers to a machine learning algorithm or set of algorithms that have been trained to generate art or media in a particular style. This can include images, music, video, or other types of media.
To create a model for generating art, a dataset of examples in the desired style is first collected and used to train the model. The model is then able to generate new art by learning patterns and characteristics from the examples it was trained on. The resulting art is not an exact copy of any of the examples in the training dataset, but rather a new piece of art that is influenced by the style of the training examples.
Models can be trained to generate a wide range of styles, from photorealistic images to abstract patterns, and can be used to create art that is difficult or time-consuming for humans to produce manually.
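The core idea above – a model learns statistical patterns from training examples and then produces new samples influenced by, but not copied from, them – can be shown with a deliberately tiny toy generative model. This is my own illustrative sketch (nothing like a real diffusion model, which learns far richer structure):

```python
import random
import statistics

# Toy "training set": say, brightness values of artworks in a desired style.
training_data = [0.62, 0.70, 0.66, 0.71, 0.64, 0.68, 0.73, 0.65]

# "Training": learn the parameters of a simple distribution from the examples.
mu = statistics.mean(training_data)
sigma = statistics.stdev(training_data)

def generate(n, seed=0):
    """Draw new values shaped by the training style, not copied from it."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]

samples = generate(5)
print(samples)  # new values clustered around the learned style
```

A real art model does the same thing in spirit, just over millions of parameters and images instead of a mean and a standard deviation.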
On Wednesday, Midjourney announced version 5 of its commercial AI image-synthesis service, which can produce photorealistic images at a quality level that some AI art fans are calling creepy and “too perfect.” Midjourney v5 is available now as an alpha test for customers who subscribe to the Midjourney service, which is available through Discord.
“MJ v5 currently feels to me like finally getting glasses after ignoring bad eyesight for a little bit too long,” said Julie Wieland, a graphic designer who often shares her Midjourney creations on Twitter. “Suddenly you see everything in 4k, it feels weirdly overwhelming but also amazing.”
[…]
Midjourney works similarly to image synthesizers like Stable Diffusion and DALL-E in that it generates images based on text descriptions called “prompts” using an AI model trained on millions of works of human-made art. Recently, Midjourney was at the heart of a copyright controversy regarding a comic book that used earlier versions of the service.
After experimenting with v5 for a day, Wieland noted improvements that include “incredibly realistic” skin textures and facial features; more realistic or cinematic lighting; better reflections, glares, and shadows; more expressive angles or overviews of a scene, and “eyes that are almost perfect and not wonky anymore.”
And, of course, the hands.
[…]
A lawsuit filed against eufy security cam maker Anker Tech claims the biz assigns “unique identifiers” to the faces of any person who walks in front of its devices – and then stores that data in the cloud, “essentially logging the locations of unsuspecting individuals” when they stroll past.
[…]
All three suits allege Anker falsely represented that its security cameras stored all data locally and did not upload that data to the cloud.
Moore went public with his claims in November last year, alleging video and audio captured by Anker’s eufy security cams could be streamed and watched by any stranger using VLC media player, […]
In a YouTube video, the complaint details, Moore allegedly showed how the “supposedly ‘private,’ ‘stored locally’, ‘transmitted only to you’ doorbell is streaming to the cloud – without cloud storage enabled.”
He claimed the devices were uploading video thumbnails and facial recognition data to Anker’s cloud server despite his never opting into Anker’s cloud services, and said he’d found that a separate camera tied to a different account could identify his face with the same unique ID.
The security researcher alleged at the time this showed Anker was not only storing facial-recognition data in the cloud but also “sharing that back-end information between accounts,” lawyers for the two other, near-identical lawsuits claim.
[…]
According to the complaint [PDF], eufy’s security cameras are marketed as “private” and as “local storage only” as a direct alternative to Anker’s competitors that require the use of cloud storage.
Desai’s complaint goes on to claim:
Not only does Anker not keep consumers’ information private, it was further revealed that Anker was uploading facial recognition data and biometrics to its Amazon Web Services cloud without encryption.
In fact, Anker has been storing its customers’ data alongside a specific username and other identifiable information on its AWS cloud servers even when its “eufy” app reflects the data has been deleted. …. Further, even when using a different camera, different username, and even a different HomeBase to “store” the footage locally, Anker is still tagging and linking a user’s facial ID to their picture across its camera platform. Meaning, once recorded on one eufy Security Camera, those same individuals are recognized via their biometrics on other eufy Security Cameras.
In an unrelated incident in 2021, a “software bug” in some of the brand’s 1080p Wi-Fi-connected Eufycam cameras sent feeds from some users’ homes to other Eufycam customers, some of whom were in other countries at the time.
[…]
Cameras made by Anker cited by the complaint include:
EufyCam; eufyCam I; eufyCam 2; eufyCam 2C; eufyCam 2 Pro; eufyCam 2C Pro; Solo IndoorCam; Solo OutdoorCam; SoloCams E20, E40, L20, L40 & S40; Video Doorbell (wired); Video Doorbell (Battery); Video Dual Doorbell (Wired); Floodlight Cam 2K; Floodlight Cam 2 E 2k; Floodlight Cam 2 Pro; and 4G Starlight Camera.
Codon is a new “high-performance Python compiler that compiles Python code to native machine code without any runtime overhead,” according to its README file on GitHub. Typical speedups over Python are on the order of 10-100x or more, on a single thread. Codon’s performance is typically on par with (and sometimes better than) that of C/C++. Unlike Python, Codon supports native multithreading, which can lead to speedups many times higher still.
Its development team includes researchers from MIT’s Computer Science and Artificial Intelligence lab, according to this announcement from MIT shared by long-time Slashdot reader Futurepower(R): The compiler lets developers create new domain-specific languages (DSLs) within Python — which is typically orders of magnitude slower than languages like C or C++ — while still getting the performance benefits of those other languages. “We realized that people don’t necessarily want to learn a new language, or a new tool, especially those who are nontechnical. So we thought, let’s take Python syntax, semantics, and libraries and incorporate them into a new system built from the ground up,” says Ariya Shajii SM ’18, PhD ’21, lead author on a new paper about the team’s new system, Codon. “The user simply writes Python like they’re used to, without having to worry about data types or performance, which we handle automatically — and the result is that their code runs 10 to 100 times faster than regular Python. Codon is already being used commercially in fields like quantitative finance, bioinformatics, and deep learning.”
The team put Codon through some rigorous testing, and it punched above its weight. Specifically, they took roughly 10 commonly used genomics applications written in Python and compiled them using Codon, and achieved five to 10 times speedups over the original hand-optimized implementations…. The Codon platform also has a parallel backend that lets users write Python code that can be explicitly compiled for GPUs or multiple cores, tasks which have traditionally required low-level programming expertise…. Part of the innovation with Codon is that the tool does type checking before running the program. That lets the compiler convert the code to native machine code, which avoids all of the overhead that Python has in dealing with data types at runtime.
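Because Codon takes standard Python syntax, the kind of numeric hotspot it targets runs unchanged under CPython too. Here is a small example of such a loop (my own illustration, not from the Codon paper); note the type annotations, which ordinary Python ignores but which match the static typing Codon resolves ahead of time:

```python
def count_primes(n: int) -> int:
    """Count primes below n by trial division – the kind of tight
    numeric loop where ahead-of-time compilation pays off most."""
    count = 0
    for i in range(2, n):
        is_prime = True
        j = 2
        while j * j <= i:
            if i % j == 0:
                is_prime = False
                break
            j += 1
        if is_prime:
            count += 1
    return count

print(count_primes(10_000))  # → 1229
```

Per the project’s README, this can be executed with `codon run -release primes.py` or built into a standalone native binary with `codon build -release -exe primes.py`; treat the exact flags as version-dependent.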
Upon first glance, the Unconventional Computing Laboratory looks like a regular workspace, with computers and scientific instruments lining its clean, smooth countertops. But if you look closely, the anomalies start appearing. A series of videos shared with PopSci show the weird quirks of this research: On top of the cluttered desks, there are large plastic containers with electrodes sticking out of a foam-like substance, and a massive motherboard with tiny oyster mushrooms growing on top of it.
[…]
Why? Integrating these complex dynamics and system architectures into computing infrastructure could in theory allow information to be processed and analyzed in new ways. And it’s definitely an idea that has gained ground recently, as seen through experimental biology-based algorithms and prototypes of microbe sensors and kombucha circuit boards.
In other words, they’re trying to see if mushrooms can carry out computing and sensing functions.
A mushroom motherboard. Andrew Adamatzky
With fungal computers, mycelium – the branching, web-like root structure of the fungus – acts as both the conductors and the electronic components of a computer. (Remember, mushrooms are only the fruiting body of the fungus.) It can receive and send electric signals, as well as retain memory.
“I mix mycelium cultures with hemp or with wood shavings, and then place it in closed plastic boxes and allow the mycelium to colonize the substrate, so everything then looks white,” says Andrew Adamatzky, director of the Unconventional Computing Laboratory at the University of the West of England in Bristol, UK. “Then we insert electrodes and record the electrical activity of the mycelium. So, through the stimulation, it becomes electrical activity, and then we get the response.” He notes that this is the UK’s only wet lab—one where chemical, liquid, or biological matter is present—in any department of computer science.
Preparing to record dynamics of electrical resistance of hemp shaving colonized by oyster fungi. Andrew Adamatzky
Classical computers see problems in binary: the ones and zeros of the traditional approach these devices use. However, most dynamics in the real world cannot always be captured through that system. This is why researchers are working on technologies like quantum computers (which could better simulate molecules) and living brain-cell-based chips (which could better mimic neural networks): they can represent and process information in different ways, using complex, multi-dimensional functions, and provide more precise calculations for certain problems.
Already, scientists know that mushrooms stay connected with the environment and the organisms around them using a kind of “internet” communication. You may have heard this referred to as the wood wide web. By deciphering the language fungi use to send signals through this biological network, scientists might be able not only to get insights about the state of underground ecosystems, but also to tap into them to improve our own information systems.
An illustration of the fruit bodies of Cordyceps fungi. Irina Petrova Adamatzky
Mushroom computers could offer some benefits over conventional computers. Although they can’t ever match the speeds of today’s modern machines, they could be more fault tolerant (they can self-regenerate), reconfigurable (they naturally grow and evolve), and consume very little energy.
Before stumbling upon mushrooms, Adamatzky worked on slime mold computers—yes, that involves using slime mold to carry out computing problems—from 2006 to 2016. Physarum, as slime molds are called scientifically, is an amoeba-like creature that spreads its mass amorphously across space.
Slime molds are “intelligent,” which means that they can figure out their way around problems, like finding the shortest path through a maze without programmers giving them exact instructions or parameters about what to do. Yet, they can be controlled as well through different types of stimuli, and be used to simulate logic gates, which are the basic building blocks for circuits and electronics.
Recording electrical potential spikes of hemp shaving colonized by oyster fungi. Andrew Adamatzky
Much of the work with slime molds was done on what are known as “Steiner tree” or “spanning tree” problems that are important in network design, and are solved by using pathfinding optimization algorithms. “With slime mold, we imitated pathways and roads. We even published a book on bio-evaluation of the road transport networks,” says Adamatzky. “Also, we solved many problems with computation geometry. We also used slime molds to control robots.”
When he had wrapped up his slime mold projects, Adamatzky wondered if anything interesting would happen if they started working with mushrooms, an organism that’s both similar to, and wildly different from, Physarum. “We found actually that mushrooms produce action potential-like spikes. The same spikes as neurons produce,” he says. “We’re the first lab to report about spiking activity of fungi measured by microelectrodes, and the first to develop fungal computing and fungal electronics.”
An example of how spiking activity can be used to make gates. Andrew Adamatzky
In the brain, neurons use spiking activity and patterns to communicate signals, a property that has been mimicked to make artificial neural networks. Mycelium does something similar. That means researchers can use the presence or absence of a spike as their zero or one, and map the different timing and spacing of the detected spikes to the various gates seen in computer programming (OR, AND, etc.). Further, if you stimulate mycelium at two separate points, conductivity between them increases and they communicate faster and more reliably, allowing memory to be established – much like how brain cells form habits.
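To make the encoding concrete, here is a toy sketch of the spike-as-bit idea – my own simplification, not the lab’s actual pipeline, with an entirely hypothetical voltage threshold: a recording window that contains a spike reads as 1, a quiet window as 0, and combined readings give gate outputs.

```python
SPIKE_THRESHOLD_MV = 0.5  # hypothetical detection threshold, in millivolts

def has_spike(trace_mv):
    """A window of electrical activity encodes 1 if any sample spikes."""
    return any(abs(v) >= SPIKE_THRESHOLD_MV for v in trace_mv)

def gate_or(trace_a, trace_b):
    return has_spike(trace_a) or has_spike(trace_b)

def gate_and(trace_a, trace_b):
    return has_spike(trace_a) and has_spike(trace_b)

quiet  = [0.01, 0.02, -0.01, 0.03]   # no spike -> logical 0
spiked = [0.02, 0.71, -0.64, 0.05]   # spike    -> logical 1

print(gate_or(quiet, spiked))   # True
print(gate_and(quiet, spiked))  # False
```

The real research reads these bits out of living mycelium via electrodes; the thresholding and gate logic here just illustrate how spiking activity can map onto Boolean computation.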
Mycelium with different geometries can compute different logical functions, and they can map these circuits based on the electrical responses they receive from it. “If you send electrons, they will spike,” says Adamatzky. “It’s possible to implement neuromorphic circuits… We can say I’m planning to make a brain from mushrooms.”
Hemp shavings in the shaping of a brain, injected with chemicals. Andrew Adamatzky
So far, they’ve worked with oyster fungi (Pleurotus djamor), ghost fungi (Omphalotus nidiformis), bracket fungi (Ganoderma resinaceum), enoki fungi (Flammulina velutipes), split gill fungi (Schizophyllum commune) and caterpillar fungi (Cordyceps militaris).
“Right now it’s just feasibility studies. We’re just demonstrating that it’s possible to implement computation, and it’s possible to implement basic logical circuits and basic electronic circuits with mycelium,” Adamatzky says. “In the future, we can grow more advanced mycelium computers and control devices.”
On Friday, a software developer named Georgi Gerganov created a tool called “llama.cpp” that can run Meta’s new GPT-3-class AI large language model, LLaMA, locally on a Mac laptop. Soon thereafter, people worked out how to run LLaMA on Windows as well. Then someone showed it running on a Pixel 6 phone, and next came a Raspberry Pi (albeit running very slowly). If this keeps up, we may be looking at a pocket-sized ChatGPT competitor before we know it. […]
Typically, running GPT-3 requires several datacenter-class A100 GPUs (also, the weights for GPT-3 are not public), but LLaMA made waves because it could run on a single beefy consumer GPU. And now, with optimizations that reduce the model size using a technique called quantization, LLaMA can run on an M1 Mac or a lesser Nvidia consumer GPU. After obtaining the LLaMA weights ourselves, we followed [independent AI researcher Simon Willison’s] instructions and got the 7B parameter version running on an M1 MacBook Air, and it runs at a reasonable speed. You call it as a script on the command line with a prompt, and LLaMA does its best to complete it in a reasonable way.
There’s still the question of how much the quantization affects the quality of the output. In our tests, LLaMA 7B trimmed down to 4-bit quantization was very impressive for running on a MacBook Air — but still not on par with what you might expect from ChatGPT. It’s entirely possible that better prompting techniques might generate better results. Also, optimizations and fine-tunings come quickly when everyone has their hands on the code and the weights — even though LLaMA is still saddled with some fairly restrictive terms of use. The release of Alpaca today by Stanford proves that fine tuning (additional training with a specific goal in mind) can improve performance, and it’s still early days after LLaMA’s release. A step-by-step instruction guide for running LLaMA on a Mac can be found here (Warning: it’s fairly technical).
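To give a feel for what quantization does, here is a minimal 4-bit linear quantizer in plain Python. This is a sketch of the general idea only; the schemes actually used for LLaMA (such as llama.cpp's group-wise quantization with per-block scales) are more sophisticated, and the weights below are made up.

```python
# A minimal 4-bit linear quantizer: floats are mapped onto 16 discrete
# levels with a single scale and offset, trading precision for size.

def quantize_4bit(weights):
    """Map floats onto the 16 levels 0..15 with a scale and offset."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 15 or 1.0  # avoid divide-by-zero for flat tensors
    codes = [round((w - lo) / scale) for w in weights]
    return codes, scale, lo

def dequantize(codes, scale, lo):
    """Recover approximate floats from the 4-bit codes."""
    return [c * scale + lo for c in codes]

w = [0.12, -0.53, 0.98, 0.0, -1.0, 0.47]
q, s, off = quantize_4bit(w)
restored = dequantize(q, s, off)
max_err = max(abs(a - b) for a, b in zip(w, restored))
print(q)        # integer codes, all within 0..15
print(max_err)  # worst-case rounding error, at most scale / 2
```

The lossiness visible in `max_err` is exactly the quality question raised above: each weight moves by up to half a quantization step, and the model's output degrades accordingly.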
A Netherlands museum is facing criticism for selecting an AI-generated piece of art to temporarily take the place of the renowned Girl with a Pearl Earring painting. The artwork was created by Johannes Vermeer in 1665 and is usually located at the Mauritshuis Museum but is on loan at the Rijksmuseum in Amsterdam until June 4.
In the interim, the Mauritshuis Museum held a competition for local artists to submit their own versions of the Girl with a Pearl Earring painting and said it would select one of the submissions to take Vermeer’s place until the painting is returned. While the competition may have seemed like a straightforward and exciting process, when the museum selected an AI-generated piece of art showing the girl with more structured and sharp outlines and glowing earrings, the art community erupted with complaints.
Of the roughly 3,480 artworks submitted, a piece by Berlin-based artist Julian van Dieken was one of five winners selected, and his so-called painting is now receiving backlash from artists and lovers of the original.
[…]
When asked for comment, the Mauritshuis Museum directed Gizmodo to a statement on its website which said they did not choose the winners by looking at what was the “most beautiful” or “best” submission. “For us, the starting point has always been that the maker has been inspired by Johannes Vermeer’s world-famous painting. And that can be in the most diverse ways in image or technique.”
Sounds like art doing what art should be doing – pushing culture and perceptions. Making people think. Just a shame the village idiot squad angle is pushed by Gizmodo. Well done the Mauritshuis!
Google has promised to offer API-level access to its large language model PaLM so that developers can build it into their apps and workflows, and thus make the ChatGPT-like text-emitting tech available to world-plus-dog.
The web giant is also threatening to bake the model’s content-generating capabilities into Google Docs, Gmail, and more.
[…]
On Tuesday, Google unveiled its PaLM API, opening up its text-generating large language model to developers looking to boost their applications with auto-generated machine-made writing and other stuff. It’s capable of summarizing and classifying text, acting as a support chat bot that interacts with folks on behalf of your organization, and other things, just like the other APIs out there from OpenAI, Cohere, and AI21 Labs.
[…]
The PaLM API also comes with MakerSuite, a tool that allows developers to experiment with different prompts and fine-tune the model’s output. These software services are available to a select few for the moment: Google is gradually rolling them out.
The internet goliath promises that general users can look forward to eventually being able to automatically generate email drafts and replies, as well as summarize text. Images, audio, and video created using the AI engine will be available to add to Slides, whilst better autocomplete is coming to Sheets. New backgrounds and note-generating features are also coming to Meet.
Anthropic, a startup co-founded by ex-OpenAI employees, today launched something of a rival to the viral sensation ChatGPT.
Called Claude, Anthropic’s AI — a chatbot — can be instructed to perform a range of tasks, including searching across documents, summarizing, writing and coding, and answering questions about particular topics. In these ways, it’s similar to OpenAI’s ChatGPT. But Anthropic makes the case that Claude is “much less likely to produce harmful outputs,” “easier to converse with” and “more steerable.”
Organizations can request access. Pricing has yet to be detailed.
[…]
Following a closed beta late last year, Anthropic has been quietly testing Claude with launch partners, including Robin AI, AssemblyAI, Notion, Quora and DuckDuckGo. Two versions are available as of this morning via an API, Claude and a faster, less costly derivative called Claude Instant.
It’s also possible to get around Claude’s built-in safety features via clever prompting, as is the case with ChatGPT. One user in the beta was able to get Claude to describe how to make meth at home.
“The challenge is making models that both never hallucinate but are still useful — you can get into a tough situation where the model figures a good way to never lie is to never say anything at all, so there’s a tradeoff there that we’re working on,” the Anthropic spokesperson said. “We’ve also made progress on reducing hallucinations, but there is more to do.”
Anthropic’s other plans include letting developers customize Claude’s constitutional principles to their own needs. Customer acquisition is another focus, unsurprisingly — Anthropic sees its core users as “startups making bold technological bets” in addition to “larger, more established enterprises.”
[…]
The company has substantial outside backing, including a $580 million tranche from a group of investors including disgraced FTX founder Sam Bankman-Fried, Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn and the Center for Emerging Risk Research.
Most recently, Google invested $300 million in Anthropic for a 10% stake in the startup. Under the terms of the deal, which was first reported by the Financial Times, Anthropic agreed to make Google Cloud its “preferred cloud provider,” with the companies “co-develop[ing] AI computing systems.”
On Tuesday, OpenAI unveiled GPT-4, an update to its advanced AI system that’s meant to generate natural-sounding language in response to user input. The company claimed GPT-4 is more accurate and more capable of solving problems, and even implied that it performs better than most humans on complicated tests. OpenAI said GPT-4 scores in the 90th percentile of the Uniform Bar Exam and the 99th percentile of the Biology Olympiad. GPT-3, the company’s previous version, scored in the 10th and 31st percentiles on those tests, respectively.
The new system is now capable of handling over 25,000 words of text, according to the company. GPT-3 was only capable of handling 2,048 linguistic tokens, or about 1,500 words at a time. This should allow for “more long-form content creation.” That’s not to say some folks haven’t tried writing entire novels with earlier versions of the LLM, but this new version could allow text to remain much more cohesive.
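A quick sanity check of those figures, using the words-per-token ratio that the article's own numbers imply (2,048 tokens ≈ 1,500 words):

```python
# Back-of-the-envelope conversion between tokens and words, using the
# ratio implied by the article's own figures. Real tokenizers vary by
# text and language, so treat this strictly as a rule of thumb.
WORDS_PER_TOKEN = 1500 / 2048  # ≈ 0.73 words per token

def tokens_to_words(tokens):
    return round(tokens * WORDS_PER_TOKEN)

def words_to_tokens(words):
    return round(words / WORDS_PER_TOKEN)

print(tokens_to_words(2048))   # 1500 -- recovers GPT-3's stated limit
print(words_to_tokens(25000))  # ~34000 tokens for GPT-4's 25,000 words
```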
Those who have been hanging on OpenAI’s every word have been long anticipating the release of GPT-4, the latest edition of the company’s large language model. OpenAI said it spent six months modifying its LLM to make it 82% less likely to respond to requests for “disallowed content” and 40% more likely to produce factual responses than previous versions. Of course, we don’t have access to OpenAI’s internal data that might show how often GPT-3 was liable to lie or showcase banned content. Few people outside OpenAI have been able to take the new system on a test run, so all these claims could very well just be mere puffery.
Folks looking to get access to GPT-4 either have to be one of the select few companies given early access, join the waitlist for the GPT-4 API, or be one of the lucky few ChatGPT Plus subscribers selected.
The new system also includes the ability to accept images as inputs, allowing the system to generate captions, or provide analyses of an image. The company used the example of an image with a few ingredients, and the system provided some examples for what food those ingredients could create. OpenAI CEO Sam Altman wrote on Twitter that the company was “previewing” its visual inputs but it will “need some time to mitigate the safety challenges.”
What else is GPT-4 good at?
In a Tuesday livestream, OpenAI showed off a few capabilities of GPT-4, though the company constantly had to remind folks not to implicitly trust everything the AI produces.
In the livestream, OpenAI President Greg Brockman showed how the system can complete relatively inane tasks, like summarizing an article in one sentence where every word starts with the same letter. He then showed how users can instill the system with new information for it to parse, adding parameters to make the AI more aware of its role.
The company co-founder said the system is relatively slow, especially when completing complex tasks, though it wouldn’t take more than a few minutes to finish up requests. In one instance, Brockman made the AI create code for an AI-based Discord bot. He constantly iterated on the requests, even inputting error messages into GPT-4 until it managed to craft what was asked. He also put in U.S. tax code to finalize some tax info for an imaginary couple.
All the while, Brockman kept reiterating that people should not “run untrusted code from humans or AI,” and that people shouldn’t implicitly trust the AI to do their taxes. Of course, that won’t stop people from doing exactly that, depending on how capable public models of this AI end up being. It relates to the very real risk of running these AI models in professional settings, even when there’s only a small chance of AI error.
“It’s not perfect, but neither are you,” Brockman said.
OpenAI is getting even more companies hooked on AI
OpenAI has apparently leveraged its recently-announced multi-billion dollar arrangement with Microsoft to train GPT-4 on Microsoft Azure supercomputers. Altman said this latest version of the company’s LLM is “more creative than previous models, it hallucinates significantly less, and it is less biased.” Still, he said the company was inviting more outside groups to evaluate GPT-4 and offer feedback.
Of course, that’s not to say the system hasn’t already been put into use by several companies. Language learning app Duolingo announced Tuesday afternoon that it was implementing a “Duolingo Max” premium subscription tier. The app has new features powered by GPT-4 that let the AI offer “context-specific explanations” for why users made a mistake. It also lets users practice conversations with the AI chatbot, meaning that damn annoying owl can now react to your language flubs in real time.
Because that’s what this is really about, getting more companies to pay to access OpenAI’s APIs. Altman mentioned the new system will have even more customization of behavior, which will further allow developers to fine-tune AI for specific purposes. Other customers of GPT-4 include the likes of Morgan Stanley, Khan Academy, and the Icelandic government. The U.S. Chamber of Commerce recently said in 10 years, virtually every company and government entity will be up on this AI tech.
Still, the company said GPT-4 has “many known limitations,” including social biases, hallucinations, and susceptibility to adversarial prompts. Even if the new system is better than before, there’s still plenty of room for the AI to be abused. Some ChatGPT users have already flooded the open submission queues of at least one popular fiction magazine. Now that GPT-4 can write even longer, it’s likely we’ll see even more long-form AI-generated content flooding the internet.
OpenAI was supposed to be all about open source and stuff, but with this definitely being about increasing (paid) API access, it’s looking more and more like a massive money grab. Not really surprising but a real shame.
[…] The invention, by a University of Bristol physicist, who gave it the name “counterportation,” provides the first-ever practical blueprint for creating in the lab a wormhole that verifiably bridges space, as a probe into the inner workings of the universe.
By deploying a novel computing scheme, revealed in the journal Quantum Science and Technology, which harnesses the basic laws of physics, a small object can be reconstituted across space without any particles crossing. Among other things, it provides a “smoking gun” for the existence of a physical reality underpinning our most accurate description of the world.
[…]
Hatim said, “Here’s the sharp distinction. While counterportation achieves the end goal of teleportation, namely disembodied transport, it remarkably does so without any detectable information carriers traveling across.”
[…]
“If counterportation is to be realized, an entirely new type of quantum computer has to be built: an exchange-free one, where communicating parties exchange no particles,” Hatim said.
[…]
“The goal in the near future is to physically build such a wormhole in the lab, which can then be used as a testbed for rival physical theories, even ones of quantum gravity,” Hatim added.
[…]
John Rarity, professor of optical communication systems at the University of Bristol, said, “We experience a classical world which is actually built from quantum objects. The proposed experiment can reveal this underlying quantum nature showing that entirely separate quantum particles can be correlated without ever interacting. This correlation at a distance can then be used to transport quantum information (qubits) from one location to another without a particle having to traverse the space, creating what could be called a traversable wormhole.”
More information: Hatim Salih, From counterportation to local wormholes, Quantum Science and Technology (2022). DOI: 10.1088/2058-9565/ac8ecd
Earlier this year, an amateur Go player decisively defeated one of the game’s top-ranked AI systems, using a strategy developed with the help of a program researchers designed to probe systems like KataGo for weaknesses. It turns out that victory is just one part of a broader Go renaissance that has seen human players become more creative since AlphaGo’s milestone victory in 2016.
In a recent study published in the journal PNAS, researchers from the City University of Hong Kong and Yale found that human Go players have become less predictable in recent years. As the New Scientist explains, the researchers came to that conclusion by analyzing a dataset of more than 5.8 million Go moves made during professional play between 1950 and 2021. With the help of a “superhuman” Go AI, a program that can play the game and grade the quality of any single move, they created a statistic called a “decision quality index,” or DQI for short.
After assigning every move in their dataset a DQI score, the team found that before 2016, the quality of professional play improved relatively little from year to year: at most, the team saw a positive median annual DQI change of 0.2, and in some years the overall quality of play even dropped. However, since the rise of superhuman AIs in 2018, median DQI values have changed at a rate above 0.7. Over that same period, professional players have employed more novel strategies: in 2018, 88 percent of games saw players set up a combination of plays that hadn’t been observed before, up from 63 percent in 2015.
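A toy version of the study's decision-quality statistic might look like the following. The per-game scores are invented; in the actual study, a superhuman Go engine grades every individual move.

```python
# A toy version of the paper's decision-quality index (DQI) statistic:
# take per-game scores by year, reduce each year to its median, then
# report the median year-over-year change.
from statistics import median

def median_annual_dqi_change(dqi_by_year):
    """dqi_by_year: {year: [per-game DQI scores]} -> median YoY delta."""
    years = sorted(dqi_by_year)
    medians = [median(dqi_by_year[y]) for y in years]
    deltas = [b - a for a, b in zip(medians, medians[1:])]
    return median(deltas)

toy = {
    2014: [0.0, 0.1, -0.1],
    2015: [0.1, 0.2, 0.3],
    2016: [0.2, 0.1, 0.3],
}
print(median_annual_dqi_change(toy))  # 0.1
```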
“Our findings suggest that the development of superhuman AI programs may have prompted human players to break away from traditional strategies and induced them to explore novel moves, which in turn may have improved their decision-making,” the team writes.
That’s an interesting change, but not exactly an unintuitive one if you think about it. As Professor Stuart Russell of the University of California, Berkeley told the New Scientist, “it’s not surprising that players who train against machines will tend to make more moves that machines approve of.”
[…] “The image was captured on a summer evening in São José dos Campos [in São Paulo state] while a negatively charged lightning bolt was nearing the ground at 370 km per second. When it was a few dozen meters from ground level, lightning rods and tall objects on the tops of nearby buildings produced positive upward discharges, competing to connect to the downward strike. The final image prior to the connection was obtained 25 thousandths of a second before the lightning hit one of the buildings,” Saba said.
He used a camera that takes 40,000 frames per second. When the video is played back in slow motion, it shows how lightning discharges behave and also how dangerous they can be if the protection system is not properly installed: Although there are more than 30 lightning rods in the vicinity, the strike connected not to them but to a smokestack on top of one of the buildings. “A flaw in the installation left the area unprotected. The impact of a 30,000-amp discharge did enormous damage,” he said.
[…]
Lightning strikes branch out as the electrical charges seek the path of least resistance, rather than the shortest path, which would be a straight line. The path of least resistance, usually a zigzag, is determined by different electrical characteristics of the atmosphere, which is not homogeneous. “A lightning strike made up of several discharges can last up to 2 seconds. However, each discharge lasts only fractions of milliseconds,” Saba said.
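The "least resistance, not shortest path" idea can be illustrated with a small weighted-graph search. This is an analogy only, not a model of lightning physics: the grid, the resistance values, and the four-neighbor moves are all invented.

```python
# Dijkstra's algorithm over a grid of "resistance" costs: the cheapest
# route detours around the high-resistance column instead of cutting
# straight across, analogous to a strike's zigzag path.
import heapq

def least_resistance_path(res):
    """Cheapest path from top-left to bottom-right over a cost grid."""
    rows, cols = len(res), len(res[0])
    dist = {(0, 0): res[0][0]}
    prev, pq = {}, [(res[0][0], (0, 0))]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == (rows - 1, cols - 1):
            break
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + res[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    node, path = (rows - 1, cols - 1), []
    while node in prev:
        path.append(node)
        node = prev[node]
    return [node] + path[::-1]

grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
print(least_resistance_path(grid))  # detours around the costly middle column
```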
Lightning rods neither attract nor repel strikes, he added. Nor do they “discharge” clouds, as used to be believed. They simply offer lightning an easy and safe route to the ground.
Because it is not always possible to rely on the protection of a lightning rod, and most atmospheric discharges occur in summer in the tropics, it is worth considering Saba’s advice. “Storms are more frequent in the afternoon than in the morning, so be careful about outdoor activities on summer afternoons. Find shelter if you hear thunder, but never under a tree or pole, and never under a rickety roof,” he said.
“If you can’t find a safe place to shelter, stay in the car and wait for the storm to blow over. If no car or other shelter is available, squat down with your feet together. Don’t stand upright or lie flat. Indoors, avoid contact with appliances and fixed-line telephones.”
It is possible to survive being struck by lightning, and there are many examples. The odds increase if the person receives care quickly. “Cardiac arrest is the only cause of death. In this case, cardiopulmonary resuscitation is the recommended treatment,” Saba said.
Saba began systematically studying lightning with high-speed cameras in 2003 and has since built what is now the world’s largest collection of high-speed lightning videos.
More information: Marcelo M. F. Saba et al, Close View of the Lightning Attachment Process Unveils the Streamer Zone Fine Structure, Geophysical Research Letters (2022). DOI: 10.1029/2022GL101482
DoNotPay, which describes itself as “the world’s first robot lawyer,” has been accused of practicing law without a license.
It’s facing a proposed class action lawsuit filed by Chicago-based law firm Edelson on March 3 and published Thursday on the website of the Superior Court of the State of California for the County of San Francisco.
The complaint argues: “Unfortunately for its customers, DoNotPay is not actually a robot, a lawyer, nor a law firm. DoNotPay does not have a law degree, is not barred in any jurisdiction, and is not supervised by any lawyer.”
The lawsuit was filed on behalf of Jonathan Faridian, who said he’d used DoNotPay to draft various legal documents including demand letters, a small claims court filing, and a job discrimination complaint.
[…]
Joshua Browder, the CEO of DoNotPay, said on Twitter that the claims had “no merit” and pledged to fight the lawsuit.
He said DoNotPay was “not going to be bullied by America’s richest class action lawyer” in a reference to Edelson founder Jay Edelson.
Browder said he’d been inspired to set up DoNotPay in 2015 to take on lawyers such as Edelson.
“Time and time again the only people that win are the lawyers. So I wanted to do something about it, building the DoNotPay robot lawyer to empower consumers to take on corporations on their own,” he said.
[…]
DoNotPay grabbed attention earlier this year after Browder said it planned to use its artificial intelligence chatbot to advise a defendant facing traffic court. This plan was postponed after Browder said he’d received “threats from State Bar prosecutors” and feared a jail sentence.
Chronic pain patients were implanted with “dummy” pieces of plastic and told it would ease their pain, according to an indictment charging the former CEO of the firm that made the fake devices with fraud.
Laura Perryman, the former CEO of Stimwave LLC, was arrested in Florida on Thursday. According to an FBI press release, Perryman was indicted “in connection with a scheme to create and sell a non-functioning dummy medical device for implantation into patients suffering from chronic pain, resulting in millions of dollars in losses to federal healthcare programs.” According to the indictment, patients underwent unnecessary implanting procedures as a result of the fraud.
Perryman was charged with one count of conspiracy to commit wire fraud and health care fraud, and one count of healthcare fraud. Stimwave received FDA approval in 2014, according to Engadget, and was positioned as an alternative to opioids for pain relief.
[…]
The Stimwave “Pink Stylet” system consisted of an implantable electrode array for stimulating the target nerve, a battery worn externally that powered it, and a separate, 9-inch long implantable receiver. When doctors told Stimwave that the long receiver was difficult to place in some patients, Perryman allegedly created the “White Stylet,” a receiver that doctors could cut to be smaller and easier to implant—but was actually just a piece of plastic that did nothing.
“To perpetuate the lie that the White Stylet was functional, Perryman oversaw training that suggested to doctors that the White Stylet was a ‘receiver,’ when, in fact, it was made entirely of plastic, contained no copper, and therefore had no conductivity,” the FBI stated. “In addition, Perryman directed other Stimwave employees to vouch for the efficacy of the White Stylet, when she knew that the White Stylet was actually non-functional.”
Stimwave charged doctors and medical providers approximately $16,000 for the device, which medical insurance providers, including Medicare, would reimburse the doctors’ offices for.
[…]
“As a result of her illegal actions, not only did patients undergo unnecessary implanting procedures, but Medicare was defrauded of millions of dollars,” FBI Assistant Director Michael J. Driscoll said.
Cerebral has revealed it shared the private health information, including mental health assessments, of more than 3.1 million patients in the United States with advertisers and social media giants like Facebook, Google and TikTok.
The telehealth startup, which exploded in popularity during the COVID-19 pandemic amid rolling lockdowns and a surge in online-only virtual health services, disclosed the security lapse [This is no security lapse! This is blatant greed served by peddling people’s personal information!] in a filing with the federal government, saying it had shared the personal and health information of patients who used the app to search for therapy or other mental health care services.
Cerebral said that it collected and shared names, phone numbers, email addresses, dates of birth, IP addresses and other demographics, as well as data collected from Cerebral’s online mental health self-assessment, which may have also included the services that the patient selected, assessment responses and other associated health information.
If an individual created a Cerebral account, the information disclosed may have included name, phone number, email address, date of birth, IP address, Cerebral client ID number, and other demographic information. If, in addition to creating a Cerebral account, an individual also completed any portion of Cerebral’s online mental health self-assessment, the information disclosed may also have included the service the individual selected, assessment responses, and certain associated health information.
If, in addition to creating a Cerebral account and completing Cerebral’s online mental health self-assessment, an individual also purchased a subscription plan from Cerebral, the information disclosed may also have included subscription plan type, appointment dates and other booking information, treatment, and other clinical information, health insurance/pharmacy benefit information (for example, plan name and group/member numbers), and insurance co-pay amount.
Cerebral was sharing patients’ data with tech giants in real-time by way of trackers and other data-collecting code that the startup embedded within its apps. Tech companies and advertisers, like Google, Facebook and TikTok, allow developers to include snippets of their custom-built code, which allows the developers to share information about their app users’ activity with the tech giants, often under the guise of analytics but also for advertising.
But users often have no idea that they are opting-in to this tracking simply by accepting the app’s terms of use and privacy policies, which many people don’t read.
Cerebral said in its notice to customers — buried at the bottom of its website — that the data collection and sharing had been going on since October 2019, when the startup was founded. The startup said it has removed the tracking code from its apps. Though not mentioned, the tech giants are under no obligation to delete the data that Cerebral shared with them.
Because of how Cerebral handles confidential patient data, it’s covered under the U.S. health privacy law known as HIPAA. According to a list of health-related security lapses under investigation by the U.S. Department of Health and Human Services, which oversees and enforces HIPAA, Cerebral’s data lapse is the second-largest breach of health data in 2023.
Australian scientists have discovered an enzyme that converts air into energy. The finding, published today in the journal Nature, reveals that this enzyme uses the low amounts of hydrogen in the atmosphere to create an electrical current. This finding opens the way to create devices that literally make energy from thin air.
The research team, led by Dr. Rhys Grinter, Ph.D. student Ashleigh Kropp, and Professor Chris Greening from the Monash University Biomedicine Discovery Institute in Melbourne, Australia, produced and analyzed a hydrogen-consuming enzyme from a common soil bacterium.
[…]
In this Nature paper, the researchers extracted the enzyme responsible for using atmospheric hydrogen from a bacterium called Mycobacterium smegmatis. They showed that this enzyme, called Huc, turns hydrogen gas into an electrical current. Dr. Grinter notes, “Huc is extraordinarily efficient. Unlike all other known enzymes and chemical catalysts, it even consumes hydrogen below atmospheric levels—as little as 0.00005% of the air we breathe.”
The researchers used several cutting-edge methods to reveal the molecular blueprint of atmospheric hydrogen oxidation. They used advanced microscopy (cryo-EM) to determine its atomic structure and electrical pathways, pushing boundaries to produce the most resolved enzyme structure reported by this method to date. They also used a technique called electrochemistry to demonstrate the purified enzyme creates electricity at minute hydrogen concentrations.
Laboratory work performed by Kropp shows that it is possible to store purified Huc for long periods. “It is astonishingly stable. It is possible to freeze the enzyme or heat it to 80 degrees Celsius, and it retains its power to generate energy,” Kropp said. “This reflects that this enzyme helps bacteria to survive in the most extreme environments.”
Huc is a “natural battery” that produces a sustained electrical current from air or added hydrogen. While this research is at an early stage, the discovery of Huc has considerable potential to develop small air-powered devices, for example as an alternative to solar-powered devices.
The bacteria that produce enzymes like Huc are common and can be grown in large quantities, meaning we have access to a sustainable source of the enzyme. Dr. Grinter says that a key objective for future work is to scale up Huc production. “Once we produce Huc in sufficient quantities, the sky is quite literally the limit for using it to produce clean energy.”
[…] Starting next week, the company will begin rolling out a public experiment that will augment Clyde, the built-in bot Discord employs to notify users of errors and respond to their slash commands, with conversational capabilities. Judging from the demo it showed off, Discord envisions people turning to Clyde for information they would have obtained from Google in the past. For instance, you might ask the chatbot for the local time in the place where someone on your server lives to decide if it would be appropriate to message them. You can invoke Clyde at any time, including in private conversations among your friends, by typing @Clyde.
Discord is quick to note Clyde is programmed not to bother you and your friends. Admins can also disable the chatbot if they don’t want to use the feature on their server. The first time you activate Clyde, Discord will display an opt-in prompt. For users worried about privacy, Anjney Midha, Discord’s head of platform ecosystem, told Engadget the company is not sharing user data with OpenAI to assist the startup in training its machine learning models.
Separate from Clyde, Discord is using OpenAI’s technology to enhance AutoMod, the automated content moderation tool the company introduced last June. As a refresher, server admins and moderators can configure AutoMod to automatically detect and block inappropriate messages before they’re posted by creating a list of words and phrases they don’t want to see. In the nine months since it began rolling out AutoMod, Discord says the feature has blocked more than 45 million unwanted messages.
Moving forward, the tool will use large language models to interpret and apply server rules. In practice, this should make AutoMod capable of spotting and taking action against people who attempt to go against a community’s norms and expectations. In one demo, Discord showed AutoMod taking action against someone who tried to skirt a server rule against self-promotion by writing their message in a different language. In that instance, AutoMod wasn’t preprogrammed to watch for a specific word or phrase, but it was able to use context to infer that there was a potential infraction.
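The gap between keyword matching and contextual understanding is easy to see in a sketch. The blocked-phrase list and messages below are made up; a real AutoMod configuration, and the LLM layer Discord is adding on top of it, are of course more involved.

```python
# Classic AutoMod-style filtering: flag a message only if a blocked
# phrase appears verbatim. Rephrasings and translations slip through,
# which is the gap an LLM-based contextual check is meant to cover.

BLOCKED_PHRASES = ["buy my merch", "subscribe to my channel"]

def keyword_automod(message):
    """Return True if any blocked phrase appears verbatim (case-folded)."""
    text = message.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

print(keyword_automod("Buy my merch at the link below"))  # True
print(keyword_automod("Compra mi mercancía aquí"))        # False -- same
# self-promotion intent in Spanish evades the verbatim word list
```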
[…]
Discord is also using OpenAI tech to power a feature that everyone should find useful: Conversation Summaries. If you’ve ever joined a large server only to immediately feel like you can’t keep up with some of its more active members, this feature promises to solve one of Discord’s longstanding pain points. When it arrives in a limited number of servers next week, the feature will begin creating bundles designed to provide you with an overview of chats you may have missed while away from the app. Each bundle will include a title, a summary of what was said and any images that were shared, as well as a log of who took part. You won’t need to endlessly scroll to try and piece together something you missed.
It can feel like Discord is just another tech firm caught up in the generative AI craze, but Midha wants users to know machine learning has been part of the Discord identity for a while. Every month, more than 30 million people use AI applications through the platform, and almost 3 million servers include at least one AI experience. On GitHub, many machine learning projects feature links to Discord servers, a fact Midha attributes to Discord being a natural place for those conversations to start.
[…] In the latest advance in nano- and micro-architected materials, engineers at Caltech have developed a new material made from numerous interconnected microscale knots.
The knots make the material far tougher than identically structured but unknotted materials: they absorb more energy and are able to deform more while still returning to their original shape undamaged. These new knotted materials may find applications in biomedicine as well as aerospace due to their durability, possible biocompatibility, and extreme deformability.
[…]
Each knot is around 70 micrometers in height and width, and each fiber has a radius of around 1.7 micrometers (around one-hundredth the radius of a human hair). While these are not the smallest knots ever made—in 2017 chemists tied a knot made from an individual strand of atoms—this does represent the first time that a material composed of numerous knots at this scale has ever been created. Further, it demonstrates the potential value of including these nanoscale knots in a material—for example, for suturing or tethering in biomedicine.
The knotted materials, which were created out of polymers, exhibit a tensile toughness that far surpasses materials that are unknotted but otherwise structurally identical, including ones where individual strands are interwoven instead of knotted. When compared to their unknotted counterparts, the knotted materials absorb 92 percent more energy and require more than twice the amount of strain to snap when pulled.
The knots were not tied but rather manufactured in a knotted state using advanced high-resolution 3D lithography capable of producing structures at the nanoscale. The samples detailed in the Science Advances paper contain simple knots—an overhand knot with an extra twist that provides additional friction to absorb additional energy while the material is stretched. In the future, the team plans to explore materials constructed from more complex knots.
[…]
More information: Widianto P. Moestopo et al, Knots are not for naught: Design, properties, and topology of hierarchical intertwined microarchitected materials, Science Advances (2023). DOI: 10.1126/sciadv.ade6725
[…] An interdisciplinary team of scientists has released a complete reconstruction and analysis of a larval fruit fly’s brain, published Thursday in the journal Science. The resulting map, or connectome, as it’s called in neuroscience, includes each of the 3,016 neurons and the 548,000 synapses running between them that make up the baby fly’s entire central nervous system. The connectome includes both of the larva’s brain lobes, as well as the nerve cord.
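For a rough sense of scale, the figures above work out to about 180 synapses per neuron on average. This is back-of-envelope arithmetic from the numbers in the article, not a figure reported by the paper:

```python
# Back-of-envelope check using the counts quoted in the article.
neurons = 3_016
synapses = 548_000

avg_synapses_per_neuron = synapses / neurons
print(round(avg_synapses_per_neuron))  # ~182
```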
[…]
It is the most complete insect brain map ever constructed and the most intricate entire connectome of any animal ever published. In short: it’s a game changer.
For some, it represents a paradigm shift in the field of neuroscience. Fruit flies are model organisms, and many neural structures and pathways are thought to be conserved across evolution. What’s true for maggots might well be true for other insects, mice, or even humans.
[…]
This new connectome is like going from a blurry satellite view to a crisp city street map. On the block-by-block array of an insect’s cortex, “now we know where every 7-11 and every, you know, Target [store] is,” Mosca said.
To complete the connectome, a group of Cambridge University scientists spent 12 years focusing on the brain of a single, 6-hour-old female fruit fly larva. The organ, approximately 170 by 160 by 70 micrometers in size, is truly teeny—within an order of magnitude of things too small to see with the naked eye. Yet the researchers were able to use electron microscopy to image it in thousands of slices, each only nanometers thick. Imaging just one of the neurons took a day, on average. From there, once the physical map of the neurons, or “brain volume,” was complete, the analysis began.
Along with computer scientists at Johns Hopkins University, the Cambridge neuroscientists assessed and categorized the neurons and synapses they’d found. The JHU researchers fine-tuned a computer program for this exact application in order to determine cell and synapse types, patterns within the brain connections, and to chart some function onto the larva connectome—based on previous neuroscience studies of behavior and sensory systems.
They found many surprises. For one, the larval fly connectome showed numerous neural pathways that zigzagged between hemispheres, demonstrating just how integrated both sides of the brain are and how nuanced signal processing can be, said Michael Winding, one of the study’s lead researchers and a Cambridge University neuroscientist, in a video call. “I never thought anything would look like that,” Winding said.
[…]
Fascinatingly, these recurrent structures mapped from an actual brain appear to closely match the architecture of some artificial intelligence models (called residual neural networks), with nested pathways allowing for different levels of complexity, Zlatic noted.
[…]
Not only was the revealed neural structure layered, but the neurons themselves appear to be multi-faceted. Sensory cells connected across modalities—visual, smell, and other inputs crossed and interacted en route to output cells, explained Zlatic. “This brain does a huge amount of multi-sensory integration…which is computationally a very powerful thing,” she added.
Then there were the types and relative quantities of cell-to-cell connections. In neuroscience, the classic “canonical” type of synapse runs from an axon to a dendrite. Yet, within the mapped larval fly brain, that only accounted for about two-thirds of the connections, Winding and Zlatic said. Axons linked to axons, dendrites to dendrites, and dendrites to axons. Scientists have known these sorts of connections exist in animal nervous systems, but the scope went far beyond what they expected. “Given the breadth of these connections, they must be important for brain computation,” Winding noted. We just don’t know exactly how.
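The tally Winding and Zlatic describe is, at heart, a count over an edge list in which every synapse is labeled by the presynaptic and postsynaptic compartments it joins. A minimal sketch with made-up data follows; only the category names come from the article, and the proportions here are arbitrary:

```python
from collections import Counter

# Toy edge list: each synapse records which compartment of the
# presynaptic neuron connects to which compartment of the
# postsynaptic neuron. Data invented for illustration.
synapses = [
    ("axon", "dendrite"),   # the "canonical" type
    ("axon", "dendrite"),
    ("axon", "dendrite"),
    ("axon", "axon"),
    ("dendrite", "dendrite"),
    ("dendrite", "axon"),
]

counts = Counter(f"{pre}-to-{post}" for pre, post in synapses)
total = sum(counts.values())
for kind, n in counts.most_common():
    print(f"{kind}: {n}/{total} ({n / total:.0%})")
```

In the real larval connectome, the same kind of tally put axon-to-dendrite connections at only about two-thirds of the total, with the remainder split among the three non-canonical types.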