Worm brain translated into a computer is taught tricks without programming

It is not much to look at: the nematode C. elegans is about one millimetre in length and is a very simple organism. But for science, it is extremely interesting. C. elegans is the only living being whose nervous system has been analysed completely. It can be drawn as a circuit diagram or reproduced in software, so that the worm’s neural activity can be simulated by a computer program.

Such an artificial C. elegans has now been trained at TU Wien (Vienna) to perform a remarkable trick: The computer worm has learned to balance a pole at the tip of its tail.
[…]
“With the help of reinforcement learning, a method also known as ‘learning based on experiment and reward’, the artificial reflex network was trained and optimized on the computer”, Mathias Lechner explains. And indeed, the team succeeded in teaching the virtual nerve system to balance a pole. “The result is a controller, which can solve a standard technology problem – stabilizing a pole, balanced on its tip. But no human being has written even one line of code for this controller, it just emerged by training a biological nerve system”, says Radu Grosu.
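For a feel of what “learning based on experiment and reward” means in code, here is a minimal sketch on the standard pole-balancing benchmark: random search over a linear controller in OpenAI Gym's CartPole environment. This illustrates only the training idea, not the TU Wien worm-circuit model, and the reset/step return values differ slightly between Gym versions.

```python
# Minimal illustration of "experiment and reward" on pole balancing.
# This is NOT the TU Wien worm-circuit controller, just the standard
# CartPole benchmark solved with random search over a linear policy.
import gym
import numpy as np

env = gym.make("CartPole-v1")

def run_episode(weights):
    """Return the total reward for one episode under a linear policy."""
    obs = env.reset()
    total = 0.0
    for _ in range(500):
        action = int(np.dot(weights, obs) > 0)   # push cart left or right
        obs, reward, done, _ = env.step(action)
        total += reward
        if done:
            break
    return total

best_weights, best_reward = None, -np.inf
for trial in range(200):                          # the "experiments"
    candidate = np.random.uniform(-1, 1, size=4)
    reward = run_episode(candidate)               # the "reward"
    if reward > best_reward:
        best_weights, best_reward = candidate, reward

print("best episode reward:", best_reward)
```

In the TU Wien work, the parameters being tuned by reward were those of the simulated C. elegans reflex circuit rather than a generic controller like the one above.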

The team is going to explore the capabilities of such control circuits further. The project raises the question of whether there is a fundamental difference between living nerve systems and computer code. Are machine learning and the activity of our brain the same on a fundamental level? At least we can be pretty sure that the simple nematode C. elegans does not care whether it lives as a worm in the ground or as a virtual worm on a computer hard drive.

Source: Technische Universität Wien: Dressierter Computerwurm lernt, einen Stab zu balancieren

Robot learns to mimic simple human motions

Researchers from the University of California, Berkeley, in the USA, have made some progress on this front by teaching code controlling a robot arm and hand to perform three tasks: grabbing an object and placing it in a specific position; pushing an object; and pushing and pulling an object after seeing the same action performed by a human arm.

Think picking up stuff, such as a toy, and placing it on a box, pushing a little car along a table, and so on.

The technique, described in a paper out this week, has been dubbed “one-shot imitation.” And, yes, it requires a lot of training before it can start copycatting people on demand. The idea is to educate the code to the point where it can immediately recognize movements, or similar movements, from its training, and replay them.

A few thousand videos depicting a human arm and a robot arm completing movements and actions are used to prime the control software. The same actions are repeated using different backgrounds, lighting effects, objects, and human arms to increase the depth of the machine-learning model’s awareness of how the limbs generally operate, and thus increase the chances of the robot successfully imitating a person on the fly.
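The Berkeley model itself isn't spelled out in the article, but the core idea is a policy that is conditioned on an embedding of a single demonstration video, so a new demonstration can be imitated without retraining. A toy PyTorch sketch of that structure (all layer sizes and names are illustrative assumptions, not the published architecture):

```python
import torch
import torch.nn as nn

class OneShotImitationPolicy(nn.Module):
    """Toy sketch: a policy conditioned on an embedding of one demonstration clip.

    demo_frames: (T, C, H, W) frames of a human demonstration
    obs:         (C, H, W)    current robot camera image
    output:      an action vector (e.g. end-effector displacement)
    """
    def __init__(self, action_dim=4, embed_dim=64):
        super().__init__()
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.policy = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, demo_frames, obs):
        demo_embed = self.frame_encoder(demo_frames).mean(dim=0)   # average over time
        obs_embed = self.frame_encoder(obs.unsqueeze(0)).squeeze(0)
        return self.policy(torch.cat([demo_embed, obs_embed]))

# Training (sketch): behaviour cloning over many (demo video, robot trajectory) pairs
# recorded with varied backgrounds, lighting and objects, so that at test time a single
# unseen human demo is enough to produce the matching robot actions.
policy = OneShotImitationPolicy()
demo = torch.randn(20, 3, 64, 64)      # stand-in demonstration clip
obs = torch.randn(3, 64, 64)           # stand-in current observation
print(policy(demo, obs).shape)         # torch.Size([4])
```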

Source: Is that you, T-1000? No, just a lil robot that can mimic humans on sight • The Register

Engineers design artificial synapse for “brain-on-a-chip” hardware

Engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.
[…]
Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions should move in the switching medium to create conductive filaments, similarly to how the “weight” of a synapse changes.

But it’s been difficult to control the flow of ions in existing designs. Kim says that’s because most switching mediums, made of amorphous materials, have unlimited possible paths through which ions can travel — a bit like Pachinko, a mechanical arcade game that funnels small steel balls down through a series of pins and levers, which act to either divert or direct the balls out of the machine.

Like Pachinko, existing switching mediums contain multiple paths that make it difficult to predict where ions will make it through. Kim says that can create unwanted nonuniformity in a synapse’s performance.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

A perfect mismatch

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material also used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow.

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.
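To get a feel for why those numbers matter, the following numpy sketch (my own illustration, not the MIT team's simulation) programs a target weight into many devices with 4 percent device-to-device spread and 1 percent cycle-to-cycle noise, and compares the result with a deliberately noisy stand-in for an amorphous device; the 25 and 10 percent figures for the latter are assumptions chosen only for contrast.

```python
import numpy as np

rng = np.random.default_rng(0)

def program_weights(target, device_spread, cycle_noise, n_devices=1000):
    """Write the same target conductance into many devices; return what is actually stored."""
    per_device_gain = 1 + rng.normal(0, device_spread, n_devices)  # fixed per device
    per_write_noise = 1 + rng.normal(0, cycle_noise, n_devices)    # changes on every write
    return target * per_device_gain * per_write_noise

target = 1.0  # normalised conductance / weight we want to store

epitaxial = program_weights(target, device_spread=0.04, cycle_noise=0.01)
amorphous = program_weights(target, device_spread=0.25, cycle_noise=0.10)  # assumed, for contrast

for name, stored in [("SiGe epitaxial synapse", epitaxial), ("amorphous synapse", amorphous)]:
    err = np.abs(stored - target) / target
    print(f"{name:24s} mean |error| = {err.mean():.3f}, worst = {err.max():.3f}")
```

In a network whose weights live in such devices, that spread translates directly into how reproducibly a trained chip behaves, which is why the 4 percent and 1 percent figures are the headline result.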

Source: Engineers design artificial synapse for “brain-on-a-chip” hardware | MIT News

Revealing True Emotions Through Micro-Expressions: A Machine Learning Approach

Micro-expressions (involuntary, fleeting facial movements that reveal true emotions) hold valuable information for scenarios ranging from security interviews and interrogations to media analysis. They occur on various regions of the face, last only a fraction of a second, and are universal across cultures. In contrast to macro-expressions like big smiles and frowns, micro-expressions are extremely subtle and nearly impossible to suppress or fake. Because micro-expressions can reveal emotions people may be trying to hide, recognizing micro-expressions can aid DoD forensics and intelligence mission capabilities by providing clues to predict and intercept dangerous situations. This blog post, the latest highlighting research from the SEI Emerging Technology Center in machine emotional intelligence, describes our work on developing a prototype software tool to recognize micro-expressions in near real-time.

Source: Revealing True Emotions Through Micro-Expressions: A Machine Learning Approach

Facebook open sources Detectron, an object detection framework in Caffe2

Today, Facebook AI Research (FAIR) open sourced Detectron — our state-of-the-art platform for object detection research.

The Detectron project was started in July 2016 with the goal of creating a fast and flexible object detection system built on Caffe2, which was then in early alpha development. Over the last year and a half, the codebase has matured and supported a large number of our projects, including Mask R-CNN and Focal Loss for Dense Object Detection, which won the Marr Prize and Best Student Paper awards, respectively, at ICCV 2017. These algorithms, powered by Detectron, provide intuitive models for important computer vision tasks, such as instance segmentation, and have played a key role in the unprecedented advancement of visual perception systems that our community has achieved in recent years.

Source: Facebook open sources Detectron – Facebook Research

Active learning machine learns to create new quantum experiments

We present an autonomous learning model which learns to design such complex experiments, without relying on previous knowledge or often flawed intuition. Our system not only learns how to design desired experiments more efficiently than the best previous approaches, but in the process also discovers nontrivial experimental techniques. Our work demonstrates that learning machines can offer dramatic advances in how experiments are generated.
[…]
The artificial intelligence system learns to create a variety of entangled states and improves the efficiency of their realization. In the process, the system autonomously (re)discovers experimental techniques which are only now becoming standard in modern quantum optical experiments—a trait which was not explicitly demanded from the system but emerged through the process of learning. Such features highlight the possibility that machines could have a significantly more creative role in future research.

Source: Active learning machine learns to create new quantum experiments

The artificial agent develops new experiments by virtually placing mirrors, prisms or beam splitters on a virtual lab table. If its actions lead to a meaningful result, the agent has a higher chance of finding a similar sequence of actions in the future. This is known as a reinforcement learning strategy.
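As a toy illustration of that strategy (not the authors' agent or their optics simulator), the loop below assembles sequences of virtual optical elements and reinforces the choices that appear in well-rewarded setups, so that useful building blocks become more likely to be tried again. The reward function is a stand-in for the physics simulation.

```python
import random

ELEMENTS = ["mirror", "prism", "beam splitter", "holographic plate"]
SETUP_LEN = 4

def reward(setup):
    # Stand-in for the physics simulation: in the real work the reward reflects
    # whether the simulated experiment produces the desired entangled state.
    target = ["beam splitter", "mirror", "beam splitter", "prism"]
    return sum(a == b for a, b in zip(setup, target)) / SETUP_LEN

# Preference weights per position, raised whenever an element appears in a rewarded setup.
weights = [{e: 1.0 for e in ELEMENTS} for _ in range(SETUP_LEN)]

def sample_setup(eps=0.2):
    setup = []
    for pos in range(SETUP_LEN):
        if random.random() < eps:                       # explore
            setup.append(random.choice(ELEMENTS))
        else:                                           # exploit learned preferences
            total = sum(weights[pos].values())
            r, acc = random.uniform(0, total), 0.0
            for e, w in weights[pos].items():
                acc += w
                if r <= acc:
                    setup.append(e)
                    break
    return setup

for episode in range(2000):
    setup = sample_setup()
    r = reward(setup)
    for pos, e in enumerate(setup):                     # reinforce rewarded choices
        weights[pos][e] += r

print("preferred setup:", [max(w, key=w.get) for w in weights])
```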

Read more at: https://phys.org/news/2018-01-artificial-agent-quantum.html

New AI System Predicts How Long Patients Will Live With Startling Accuracy

By using an artificially intelligent algorithm to predict patient mortality, a research team from Stanford University is hoping to improve the timing of end-of-life care for critically ill patients.

After parsing through 2 million records, the researchers identified 200,000 patients suitable for the project. The researchers were “agnostic” to disease type, disease stage, severity of admission (ICU versus non-ICU), and so on. All of these patients had associated case reports, including a diagnosis, the number of scans ordered, the types of procedures performed, the number of days spent in the hospital, medicines used, and other factors.

The deep learning algorithm studied the case reports from 160,000 of these patients, and was given the directive: “Given a patient and a date, predict the mortality of that patient within 12 months from that date, using EHR data of that patient from the prior year.” The system was trained to predict patient mortality within the next three to 12 months. Patients with less than three months of lifespan weren’t considered, as that would leave insufficient time for palliative care preparations.

Armed with its new skills, the algorithm was tasked with assessing the remaining 40,000 patients. It did quite well, successfully predicting patient mortality within the 3 to 12 month timespan in nine out of 10 cases. Around 95 percent of patients who were assessed with a low probability of dying within that period lived beyond 12 months. The pilot study proved successful, and the researchers are now hoping their system will be applied more broadly.
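The Stanford system is a deep network over richly coded EHR data, but the overall framing can be sketched with ordinary tooling: turn each patient into a feature vector built from the prior year of records, attach a died-within-3-to-12-months label, train on 160,000 patients and evaluate on the held-out 40,000. Everything below (the synthetic features, the label construction, the classifier choice) is an illustrative assumption, not the published pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for per-patient features built from the prior year of EHR data:
# counts of diagnoses, procedures, scans, hospital days, medications, ...
n_patients, n_features = 200_000, 50
X = rng.poisson(1.0, size=(n_patients, n_features)).astype(float)

# Stand-in label: died within 3 to 12 months of the prediction date.
risk = X[:, :5].sum(axis=1) + rng.normal(0, 2, n_patients)
y = (risk > np.quantile(risk, 0.9)).astype(int)

# Same split as described in the article: 160,000 patients to learn from, 40,000 held out.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=160_000, random_state=0, stratify=y)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print("held-out AUC:", round(roc_auc_score(y_test, probs), 3))
```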

Source: New AI System Predicts How Long Patients Will Live With Startling Accuracy

When It Comes to Gorillas, Google Photos Remains Blind – it’s hard to hold an AI to account

In a third test attempting to assess Google Photos’ view of people, WIRED also uploaded a collection of more than 10,000 images used in facial-recognition research. The search term “African american” turned up only an image of grazing antelope. Typing “black man,” “black woman,” or “black person,” caused Google’s system to return black-and-white images of people, correctly sorted by gender, but not filtered by race. The only search terms with results that appeared to select for people with darker skin tones were “afro” and “African,” although results were mixed.

A Google spokesperson confirmed that “gorilla” was censored from searches and image tags after the 2015 incident, and that “chimp,” “chimpanzee,” and “monkey” are also blocked today. “Image labeling technology is still early and unfortunately it’s nowhere near perfect,” the spokesperson wrote in an email, highlighting a feature of Google Photos that allows users to report mistakes.

Google’s caution around images of gorillas illustrates a shortcoming of existing machine-learning technology. With enough data and computing power, software can be trained to categorize images or transcribe speech to a high level of accuracy. But it can’t easily go beyond the experience of that training. And even the very best algorithms lack the ability to use common sense, or abstract concepts, to refine their interpretation of the world as humans do.

Source: When It Comes to Gorillas, Google Photos Remains Blind | WIRED

Boffins tweak audio by 0.1% to fool speech recognition engines

A paper by Nicholas Carlini and David Wagner of the University of California, Berkeley has shown off a technique to trick speech recognition by changing the source waveform by 0.1 per cent.

The pair wrote at arXiv that their attack achieved a first: not merely an attack that made a speech recognition (SR) engine fail, but one that returned a result chosen by the attacker. In other words, because the attack waveform is 99.9 per cent identical to the original, a human wouldn’t notice what’s wrong with a recording of “it was the best of times, it was the worst of times”, but an AI could be tricked into transcribing it as something else entirely: the authors say it could produce “it is a truth universally acknowledged that a single” from a slightly altered sample.

It works every single time: the pair claimed a 100 per cent success rate for their attack, and frighteningly, an attacker can even hide a target waveform in what (to the observer) appears to be silence.
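A rough sketch of the attack's inner loop: treat the added perturbation as the variable being optimised, and minimise a targeted transcription loss (CTC) plus a penalty on the perturbation's size. The stand-in model below is tiny and untrained, so this only demonstrates the optimisation pattern, not a working attack on a production recogniser.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in recogniser: maps a waveform to per-frame character logits.
# (Untrained and far simpler than a real speech model; pattern only.)
N_CHARS = 28                                     # blank + a-z + space
recogniser = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=11, stride=4, padding=5), nn.ReLU(),
    nn.Conv1d(32, N_CHARS, kernel_size=11, stride=4, padding=5),
)

waveform = torch.randn(1, 1, 16000)              # one second of "audio"
target = torch.randint(1, N_CHARS, (1, 12))      # encoded target transcription
delta = torch.zeros_like(waveform, requires_grad=True)

ctc = nn.CTCLoss(blank=0)
opt = torch.optim.Adam([delta], lr=1e-3)

for step in range(200):
    logits = recogniser(waveform + delta)                     # (1, N_CHARS, T)
    log_probs = logits.permute(2, 0, 1).log_softmax(dim=2)    # (T, 1, N_CHARS)
    T = log_probs.shape[0]
    loss = ctc(log_probs, target,
               input_lengths=torch.tensor([T]),
               target_lengths=torch.tensor([target.shape[1]]))
    loss = loss + 100.0 * delta.abs().mean()     # keep the change tiny (the "0.1%")
    opt.zero_grad()
    loss.backward()
    opt.step()

print("relative perturbation:", (delta.abs().mean() / waveform.abs().mean()).item())
```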

Source: Boffins tweak audio by 0.1% to fool speech recognition engines • The Register

This Ex-NSA Hacker Is Building an AI to Find Hate Symbols on Twitter

NEMESIS, according to Crose, can help spot symbols that have been co-opted by hate groups to signal to each other in plain sight. At a glance, the way NEMESIS works is relatively simple. There’s an “inference graph,” which is a mathematical representation of trained images, classified as Nazi or white supremacist symbols. This inference graph trains the system with machine learning to identify the symbols in the wild, whether they are in pictures or videos.
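An “inference graph” here is essentially a frozen TensorFlow graph with the trained weights baked in. The snippet below shows the generic TensorFlow 1.x-style pattern for loading such a graph and running an image through it; the file name and tensor names are placeholders, not NEMESIS internals.

```python
import numpy as np
import tensorflow as tf   # TF1-style API accessed via tf.compat.v1

GRAPH_PATH = "frozen_inference_graph.pb"   # placeholder path, not the NEMESIS model
INPUT_TENSOR = "input:0"                   # placeholder tensor names; a real frozen
OUTPUT_TENSOR = "predictions:0"            # graph documents its own input/output names

def load_graph(path):
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(path, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.compat.v1.import_graph_def(graph_def, name="")
    return graph

def classify(graph, image):
    """image: float32 array shaped (1, height, width, 3), already preprocessed."""
    with tf.compat.v1.Session(graph=graph) as sess:
        preds = sess.run(OUTPUT_TENSOR, feed_dict={INPUT_TENSOR: image})
    return preds  # e.g. per-class scores such as hate symbol vs. not a symbol

# graph = load_graph(GRAPH_PATH)
# scores = classify(graph, np.zeros((1, 224, 224, 3), dtype=np.float32))
```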

Source: This Ex-NSA Hacker Is Building an AI to Find Hate Symbols on Twitter – Motherboard

AI System Sorts News Articles By Whether or Not They Contain Actual Information

In a recent paper published in the Journal of Artificial Intelligence Research, computer scientists Ani Nenkova and Yinfei Yang, of the University of Pennsylvania and Google, respectively, describe a new machine learning approach to classifying written journalism according to a formalized idea of “content density.” With an average accuracy of around 80 percent, their system was able to classify news stories across a wide range of domains, spanning from international relations and business to sports and science journalism, when evaluated against a ground truth dataset of already correctly classified news articles.
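The published models use richer linguistic features, but the task itself, labelling a news lead as information-dense or not, is a standard supervised text-classification setup. A minimal sketch with off-the-shelf scikit-learn parts, using my own choice of features and classifier rather than the authors':

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in dataset: 1 = informative lead, 0 = low content density.
leads = [
    "The central bank raised interest rates by 0.25 points on Tuesday.",
    "Researchers report a 12 percent drop in measured emissions since 2015.",
    "You won't believe what happened next in this unfolding story.",
    "Here are some thoughts about things that may or may not matter.",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(leads, labels)

print(model.predict(["Officials confirmed the merger will close in March."]))
```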

Source: AI System Sorts News Articles By Whether or Not They Contain Actual Information – Motherboard

Google’s voice-generating AI is now indistinguishable from humans

A research paper published by Google this month—which has not been peer reviewed—details a text-to-speech system called Tacotron 2, which claims near-human accuracy at imitating audio of a person speaking from text.

The system is Google’s second official generation of the technology, which consists of two deep neural networks. The first network translates the text into a spectrogram (pdf), a visual way to represent audio frequencies over time. That spectrogram is then fed into WaveNet, a system from Alphabet’s AI research lab DeepMind, which reads the chart and generates the corresponding audio elements accordingly.

Source: Google’s voice-generating AI is now indistinguishable from humans — Quartz

Project Maven brings AI to the fight against ISIS

For years, the Defense Department’s most senior leadership has lamented the fact that US military and spy agencies, where artificial intelligence (AI) technology is concerned, lag far behind state-of-the-art commercial technology. Though US companies and universities lead the world in advanced AI research and commercialization, the US military still performs many activities in a style that would be familiar to the military of World War II.

As of this month, however, that has begun to change. Project Maven is a crash Defense Department program that was designed to deliver AI technologies—specifically, technologies that involve deep learning neural networks—to an active combat theater within six months from when the project received funding. Most defense acquisition programs take years or even decades to reach the battlefield, but technologies developed through Project Maven have already been successfully deployed in the fight against ISIS. Despite their rapid development and deployment, these technologies are getting strong praise from their military intelligence users. For the US national security community, Project Maven’s frankly incredible success foreshadows enormous opportunities ahead—as well as enormous organizational, ethical, and strategic challenges.
[…]
As its AI beachhead, the department chose Project Maven, which focuses on analysis of full-motion video data from tactical aerial drone platforms such as the ScanEagle and medium-altitude platforms such as the MQ-1C Gray Eagle and the MQ-9 Reaper. These drone platforms and their full-motion video sensors play a major role in the conflict against ISIS across the globe. The tactical and medium-altitude video sensors of the Scan Eagle, MQ-1C, and MQ-9 produce imagery that more or less resembles what you see on Google Earth. A single drone with these sensors produces many terabytes of data every day. Before AI was incorporated into analysis of this data, it took a team of analysts working 24 hours a day to exploit only a fraction of one drone’s sensor data.
[…]
Now that Project Maven has met the sky-high expectations of the department’s former second-ranking official, its success will likely spawn a hundred copycats throughout the military and intelligence community. The department must ensure that these copycats actually replicate Project Maven’s secret sauce—which is not merely its focus on AI technology. The project’s success was enabled by its organizational structure: a small, operationally focused, cross-functional team that was empowered to develop external partnerships, leverage existing infrastructure and platforms, and engage with user communities iteratively during development. AI needs to be woven throughout the fabric of the Defense Department, and many existing department institutions will have to adopt project management structures similar to Maven’s if they are to run effective AI acquisition programs.
[…]
To its credit, the department selected drone video footage as an AI beachhead because it wanted to avoid some of the more thorny ethical and strategic challenges associated with automation in warfare. As US military and intelligence agencies implement modern AI technology across a much more diverse set of missions, they will face wrenching strategic, ethical, and legal challenges—which Project Maven’s narrow focus helped it avoid.

Source: Project Maven brings AI to the fight against ISIS | Bulletin of the Atomic Scientists

Canada to use AI to Study ‘Suicide-Related Behavior’ on Social Media

This month the Canadian government is launching a pilot program to research and predict suicide rates in the country using artificial intelligence. The pilot will mine Canadians’ social media posts “in order to identify patterns associated with users who discuss suicide-related behavior,” according to a recently published contract document.

Source: Canada Is Using AI to Study ‘Suicide-Related Behavior’ on Social Media

Using stickers in the field of view to fool image recognition AIs

In a research paper presented in December through a workshop at the 31st Conference on Neural Information Processing Systems (NIPS 2017) and made available last week through ArXiv, a team of researchers from Google discuss a technique for creating an adversarial patch.

This patch, sticker, or cutout consists of a psychedelic graphic which, when placed next to an object like a banana, makes image recognition software see something entirely different, such as a toaster.
[…]
“We construct an attack that does not attempt to subtly transform an existing item into another,” the researchers explain. “Instead, this attack generates an image-independent patch that is extremely salient to a neural network. This patch can then be placed anywhere within the field of view of the classifier, and causes the classifier to output a targeted class.”

The boffins observe that because the patch is separate from the scene, it allows attacks on image recognition systems without concern for lighting conditions, camera angles, the type of classifier being attacked, or other objects present in the scene.

While the ruse recalls schemes to trick face scanning systems with geometric makeup patterns, it doesn’t involve altering the salient object in the scene. The addition of the adversarial patch to the scene is enough to confuse the image classification code.
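The optimisation behind such a patch is easy to sketch: keep the image fixed, paste a trainable patch at random positions, and push the classifier toward the target class. The paper additionally optimises over rotations, scales and many training images, which is omitted below; the photo and the target label index are placeholders.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

TARGET_CLASS = 859                      # commonly listed as "toaster" in the ImageNet index
PATCH_SIZE = 64

image = torch.rand(1, 3, 224, 224)      # placeholder for a real, normalised photo
patch = torch.rand(1, 3, PATCH_SIZE, PATCH_SIZE, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

for step in range(100):
    # Paste the patch at a random location (rotation/scale robustness omitted).
    x = int(torch.randint(0, 224 - PATCH_SIZE, (1,)))
    y = int(torch.randint(0, 224 - PATCH_SIZE, (1,)))
    attacked = image.clone()
    attacked[:, :, y:y + PATCH_SIZE, x:x + PATCH_SIZE] = patch.clamp(0, 1)

    loss = F.cross_entropy(model(attacked), torch.tensor([TARGET_CLASS]))
    opt.zero_grad()
    loss.backward()
    opt.step()

print("target-class probability:",
      F.softmax(model(attacked), dim=1)[0, TARGET_CLASS].item())
```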

Source: Now that’s sticker shock: Sticky labels make image-recog AI go bananas for toasters • The Register

Another AI attack, this time against ‘black box’ machine learning

Unlike adversarial models that attack AIs “from the inside”, attacks developed for black boxes could be used against closed systems like autonomous cars, security (facial recognition, for example), or speech recognition (Alexa or Cortana).

The tool, called Foolbox, is currently under review for presentation at next year’s International Conference on Learning Representations (kicking off at the end of April).

Wieland Brendel, Jonas Rauber and Matthias Bethge of the Eberhard Karls University Tübingen, Germany, explained at arXiv that Foolbox is a “decision-based” attack called a boundary attack, which “starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial”.

“Its basic operating principle – starting from a large perturbation and successively reducing it – inverts the logic of essentially all previous adversarial attacks. Besides being surprisingly simple, the boundary attack is also extremely flexible”, they wrote.

For example, “transfer-based attacks” have to be tested against the same training data as the models they’re attacking, and need “cumbersome substitute models”.

Gradient-based attacks, the paper claimed, also need detailed knowledge about the target model, while score-based attacks need access to the target model’s confidence scores.

The boundary attack, the paper said, only needs to see the final decision of a machine learning model – the class label it applies to an input, for example, or in a speech recognition model, the transcribed sentence.
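In code, the boundary attack needs nothing more than a predict() function that returns a label. The stripped-down loop below uses fixed step sizes instead of the paper's adaptive ones, and a toy linear classifier as the stand-in black box:

```python
import numpy as np

rng = np.random.default_rng(0)

def boundary_attack(predict, original, starting_point, steps=1000,
                    source_step=0.01, spherical_step=0.01):
    """Decision-based attack: only predict(x) -> label is ever called."""
    orig_label = predict(original)
    x = starting_point.copy()
    assert predict(x) != orig_label, "starting point must already be adversarial"

    for _ in range(steps):
        direction = original - x
        dist = np.linalg.norm(direction)

        # Orthogonal (spherical) step: random perturbation of the current candidate.
        noise = rng.normal(size=x.shape)
        noise *= spherical_step * dist / np.linalg.norm(noise)

        # Step toward the original image to shrink the perturbation.
        candidate = np.clip(x + noise + source_step * direction, 0.0, 1.0)

        if predict(candidate) != orig_label:        # keep it only if still adversarial
            x = candidate
    return x

# Toy "black box": a linear classifier over flattened images (stand-in for a real model).
w = rng.normal(size=28 * 28)
predict = lambda img: int(img.ravel() @ w > 0)

original = rng.uniform(0, 1, (28, 28))
start = rng.uniform(0, 1, (28, 28))
while predict(start) == predict(original):          # find any differently-labelled start
    start = rng.uniform(0, 1, (28, 28))

adv = boundary_attack(predict, original, start)
print("final distance:", np.linalg.norm(adv - original))
print("labels:", predict(original), predict(adv))
```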

Source: Another AI attack, this time against ‘black box’ machine learning • The Register

KLM uses AI to answer questions on social media

According to KLM, its 250 social media staff handle 30,000 conversations every week. The airline is mentioned on social media more than 130,000 times a week. On average, a conversation between KLM and a customer consists of five to six questions and answers. The frequently asked questions that can be answered automatically with the help of artificial intelligence are usually asked at the beginning of the conversation.

Source: KLM laat kunstmatige intelligentie direct vragen beantwoorden op social – Emerce

China’s big brother: how artificial intelligence is catching criminals and advancing health care

“Our machines can very easily recognise you among at least 2 billion people in a matter of seconds,” says chief executive and Yitu co-founder Zhu Long, “which would have been unbelievable just three years ago.” Yitu’s Dragonfly Eye generic portrait platform already has 1.8 billion photographs to work with: those logged in the national database and you, if you have visited China recently. Yitu will not say whether Hong Kong identity card holders have been logged in the government’s database, for which the company provides navigation software and algorithms, but 320 million of the photos have come from China’s borders, including ports and airports, where pictures are taken of everyone who enters and leaves the country.

According to Yitu, its platform is also in service with more than 20 provincial public security departments, and is used as part of more than 150 municipal public security systems across the country. Dragonfly Eye has already proved its worth. On its very first day of operation on the Shanghai Metro, in January, the system identified a wanted man when he entered a station. After matching his face against the database, Dragonfly Eye sent his photo to a policeman, who made an arrest. In the following three months, 567 suspected lawbreakers were caught on the city’s underground network.
[…]
“Chinese authorities are collecting and centralising ever more information about hundreds of millions of ordinary people, identifying persons who deviate from what they determine to be ‘normal thought’ and then surveilling them,” says Sophie Richardson, China director at HRW. The activist calls on Beijing to cease the collection of big data “until China has meaningful privacy rights and an accountable police force”.

Source: China’s big brother: how artificial intelligence is catching criminals and advancing health care | Post Magazine | South China Morning Post

AI helps find planets in other solar systems

The neural network is trained on 15,000 signals from the Kepler dataset that have been previously verified as planets or non-planets. A smaller test set with new, unseen data was fed to the neural network, and it correctly distinguished true planets from false positives with an accuracy of about 96 per cent. The researchers then applied this model to weaker signals from 670 star systems, where scientists had already found multiple known planets, to try and find any that might have been missed. Vanderburg said they got lots of false positives of planets, but also more potential real ones too. “It’s like sifting through rocks to find jewels. If you have a finer sieve then you will catch more rocks but you might catch more jewels, as well,” he said.
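A scaled-down sketch of that setup, a small 1D convolutional network over fixed-length light-curve windows trained on labelled planet and non-planet signals, is shown below with synthetic data standing in for the Kepler archive (the published model is considerably more elaborate):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for folded light curves: "planets" get a box-shaped transit dip.
def make_batch(n=64, length=201):
    x = torch.randn(n, 1, length) * 0.05
    y = torch.randint(0, 2, (n,))
    for i in range(n):
        if y[i] == 1:
            c = length // 2
            x[i, 0, c - 8:c + 8] -= 0.5          # transit-like dip in brightness
    return x, y

model = nn.Sequential(
    nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
    nn.Flatten(), nn.Linear(32, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(300):
    x, y = make_batch()
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

x_test, y_test = make_batch(256)
acc = (model(x_test).argmax(dim=1) == y_test).float().mean()
print("accuracy on held-out synthetic signals:", acc.item())
```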

Source: Sigh. It’s not quite Star Trek’s Data, but it’ll do: AI helps boffins clock second Solar System • The Register

Google Taught an AI to Make Sense of the Human Genome

This week, Google released a tool called DeepVariant that uses deep learning to piece together a person’s genome and more accurately identify mutations in a DNA sequence. Built on the back of the same technology that allows Google to identify whether a photo is of a cat or dog, DeepVariant solves an important problem in the world of DNA analysis. Modern DNA sequencers perform what’s known as high-throughput sequencing, returning not one long readout of a full DNA sequence but short snippets that overlap. Those snippets are then compared against another genome to help piece it together and identify variations. But the technology is error-prone, and it can be difficult for scientists to distinguish between those errors and small mutations. And small mutations matter: they could provide significant insight into, say, the root cause of a disease. Distinguishing which base pairs are the result of error and which are for real is called “variant calling.”
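DeepVariant's trick is to recast variant calling as image classification: for each candidate site, the aligned read “pileup” is rendered as a multi-channel image and a convolutional network decides between genotypes. The heavily simplified sketch below is my own illustration of that encoding; the channels, window size and toy network are assumptions, not DeepVariant's actual format.

```python
import numpy as np
import torch
import torch.nn as nn

BASES = {"A": 0.25, "C": 0.5, "G": 0.75, "T": 1.0}

def pileup_tensor(reads, qualities, max_reads=32, window=21):
    """Encode aligned reads covering a candidate site as a (channels, reads, window) image.
    Channel 0: base identity, channel 1: base quality, channel 2: read present."""
    img = np.zeros((3, max_reads, window), dtype=np.float32)
    for r, (read, qual) in enumerate(zip(reads[:max_reads], qualities[:max_reads])):
        for c, (base, q) in enumerate(zip(read[:window], qual[:window])):
            img[0, r, c] = BASES.get(base, 0.0)
            img[1, r, c] = q / 60.0
            img[2, r, c] = 1.0
    return torch.from_numpy(img)

# Toy classifier over the pileup image: homozygous-reference / heterozygous / homozygous-alt.
classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 3),
)

reads = ["ACGTACGTACGTACGTACGTA", "ACGTACGTACGTACGTACGTA", "ACGTACGTTCGTACGTACGTA"]
quals = [[40] * 21, [35] * 21, [20] * 21]   # low quality on the read carrying the difference
genotype_scores = classifier(pileup_tensor(reads, quals).unsqueeze(0))
print(genotype_scores.shape)                # torch.Size([1, 3])
```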

Source: Google Taught an AI to Make Sense of the Human Genome

AI in storytelling: Machines as cocreators

Sunspring debuted at the SCI-FI LONDON film festival in 2016. Set in a dystopian world with mass unemployment, the movie attracted many fans, with one viewer describing it as amusing but strange. But the most notable aspect of the film involves its creation: an artificial-intelligence (AI) bot wrote Sunspring’s screenplay.

Some researchers have already used machine learning to identify emotional arcs in stories. One method, developed at the University of Vermont, involved having computers scan text—video scripts or book content—to construct arcs.

We decided to go a step further. Working as part of a broader collaboration between MIT’s Lab for Social Machines and McKinsey’s Consumer Tech and Media team, we developed machine-learning models that rely on deep neural networks to “watch” small slices of video—movies, TV, and short online features—and estimate their positive or negative emotional content by the second.

These models consider all aspects of a video—not just the plot, characters, and dialogue but also more subtle touches, like a close-up of a person’s face or a snippet of music that plays during a car-chase scene. When the content of each slice is considered in total, the story’s emotional arc emerges.
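Sketched in code, the arc extraction is a per-slice scoring problem followed by smoothing. The scoring function below is a stub standing in for the trained models described above; the point is only the slice, score, smooth structure.

```python
import numpy as np

def score_slice(video_slice):
    """Stub for a trained model that returns emotional valence in [-1, 1]
    for a few seconds of audio and video. Here: random numbers as a stand-in."""
    return float(np.random.uniform(-1, 1))

def emotional_arc(video_slices, window=15):
    raw = np.array([score_slice(s) for s in video_slices])
    kernel = np.ones(window) / window
    return np.convolve(raw, kernel, mode="same")    # smoothed second-by-second arc

slices = [f"slice_{i}" for i in range(600)]         # e.g. one slice per second of film
arc = emotional_arc(slices)
print("arc ranges from", arc.min().round(2), "to", arc.max().round(2))
```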

Source: AI in storytelling: Machines as cocreators | McKinsey & Company

AI-Assisted Fake Porn Is Here and We’re All Fucked – Motherboard

There’s a video of Gal Gadot having sex with her stepbrother on the internet. But it’s not really Gadot’s body, and it’s barely her own face. It’s an approximation, face-swapped to look like she’s performing in an existing incest-themed porn video.

The video was created with a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together.

It’s not going to fool anyone who looks closely. Sometimes the face doesn’t track correctly and there’s an uncanny valley effect at play, but at a glance it seems believable. It’s especially striking considering that it’s allegedly the work of one person—a Redditor who goes by the name ‘deepfakes’—not a big special effects studio that can digitally recreate a young Princess Leia in Rogue One using CGI. Instead, deepfakes uses open-source machine learning tools like TensorFlow, which Google makes freely available to researchers, graduate students, and anyone with an interest in machine learning.
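The article doesn't describe the model, but the approach commonly attributed to such face-swap tools is an autoencoder with one shared encoder and a separate decoder per identity: train on faces of both people, then encode person A's face and decode it with person B's decoder. A skeletal sketch of that architecture, with all sizes and training details as illustrative assumptions:

```python
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    """Shared encoder, one decoder per identity: swap by decoding with the other head."""
    def __init__(self, latent=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(64 * 16 * 16, latent),
        )
        def make_decoder():
            return nn.Sequential(
                nn.Linear(latent, 64 * 16 * 16), nn.ReLU(),
                nn.Unflatten(1, (64, 16, 16)),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )
        self.decoder_a, self.decoder_b = make_decoder(), make_decoder()

    def forward(self, face, identity):
        z = self.encoder(face)
        return self.decoder_a(z) if identity == "a" else self.decoder_b(z)

# Training (sketch): reconstruct A faces through decoder_a and B faces through decoder_b;
# at generation time, feed an A face through the shared encoder but decode with decoder_b.
model = FaceSwapAutoencoder()
face_a = torch.rand(1, 3, 64, 64)
swapped = model(face_a, identity="b")
print(swapped.shape)   # torch.Size([1, 3, 64, 64])
```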

Like the Adobe tool that can make people say anything, and the Face2Face algorithm that can swap a recorded video with real-time face tracking, this new type of fake porn shows that we’re on the verge of living in a world where it’s trivially easy to fabricate believable videos of people doing and saying things they never did. Even having sex.

So far, deepfakes has posted hardcore porn videos featuring the faces of Scarlett Johansson, Maisie Williams, Taylor Swift, Aubrey Plaza, and Gal Gadot on Reddit. I’ve reached out to the management companies and/or publicists who represent each of these actors informing them of the fake videos, and will update if I hear back.

Source: AI-Assisted Fake Porn Is Here and We’re All Fucked – Motherboard

DeepMind’s AI became a superhuman chess (and shogi and go) player in a few hours using generic reinforcement learning

In the paper, DeepMind describes how a descendant of the AI program that first conquered the board game Go has taught itself to play a number of other games at a superhuman level. After eight hours of self-play, the program bested the AI that first beat the human world Go champion; and after four hours of training, it beat the current world champion chess-playing program, Stockfish. Then for a victory lap, it trained for just two hours and polished off one of the world’s best shogi-playing programs named Elmo (shogi being a Japanese version of chess that’s played on a bigger board).

One of the key advances here is that the new AI program, named AlphaZero, wasn’t specifically designed to play any of these games. In each case, it was given some basic rules (like how knights move in chess, and so on) but was programmed with no other strategies or tactics. It simply got better by playing itself over and over again at an accelerated pace — a method of training AI known as “reinforcement learning.”

Using reinforcement learning in this way isn’t new in and of itself. DeepMind’s engineers used the same method to create AlphaGo Zero, the AI program that was unveiled this October. But, as this week’s paper describes, the new AlphaZero is a “more generic version” of the same software, meaning it can be applied to a broader range of tasks without being primed beforehand. What’s remarkable here is that in less than 24 hours, the same computer program was able to teach itself how to play three complex board games at superhuman levels. That’s a new feat for the world of AI.
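A toy version of the self-play loop, with no neural network and no Monte Carlo tree search, just a value table updated from the outcomes of games the agent plays against itself, conveys the “only the rules are given” idea on tic-tac-toe:

```python
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

values = defaultdict(float)      # value of a position from X's point of view
EPS, ALPHA = 0.1, 0.2

def after(board, move, player):
    return board[:move] + player + board[move + 1:]

def choose(board, player):
    moves = [i for i, c in enumerate(board) if c == " "]
    if random.random() < EPS:                            # explore
        return random.choice(moves)
    sign = 1 if player == "X" else -1                    # O minimises X's value
    return max(moves, key=lambda m: sign * values[after(board, m, player)])

def self_play_game():
    board, player, history = " " * 9, "X", []
    while winner(board) is None:
        board = after(board, choose(board, player), player)
        history.append(board)
        player = "O" if player == "X" else "X"
    result = {"X": 1.0, "O": -1.0, "draw": 0.0}[winner(board)]
    for state in history:                       # move every visited state toward the outcome
        values[state] += ALPHA * (result - values[state])

for _ in range(20000):                          # games against itself, given only the rules
    self_play_game()

print("learned value of opening in the centre:",
      round(values[after(" " * 9, 4, "X")], 2))
```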

Source: DeepMind’s AI became a superhuman chess player in a few hours – The Verge

This frostbitten black metal album was created by an artificial intelligence

“Coditany of Timeness” is a convincing lo-fi black metal album, complete with atmospheric interludes, tremolo guitar, frantic blast beats and screeching vocals. But the record, which you can listen to on Bandcamp, wasn’t created by musicians. Instead, it was generated by two musical technologists using deep learning software that ingests a musical album, processes it, and spits out an imitation of its style.

To create Coditany, the software broke “Diotima,” a 2011 album by a New York black metal band called Krallice, into small segments of audio. Then they fed each segment through a neural network — a type of artificial intelligence modeled loosely on a biological brain — and asked it to guess what the waveform of the next individual sample of audio would be. If the guess was right, the network would strengthen the paths of the neural network that led to the correct answer, similar to the way electrical connections between neurons in our brain strengthen as we learn new skills.

At first the network just produced washes of textured noise. “Early in its training, the kinds of sounds it produces are very noisy and grotesque and textural,” said CJ Carr, one of the creators of the algorithm. But as it moved through guesses — as many as five million over the course of three days — the network started to sound a lot like Krallice. “As it improves its training, you start hearing elements of the original music it was trained on come through more and more.”

As someone who used to listen to lo-fi black metal, I found Coditany of Timeness not only convincing — it sounds like a real human band — but even potentially enjoyable. The neural network managed to capture the genre’s penchant for long intros broken by frantic drums and distorted vocals. The software’s take on Krallice, which its creators filled out with song titles and album art that were also algorithmically generated, might not garner a glowing review on Pitchfork, but it’s strikingly effective at capturing the aesthetic. If I didn’t know it was generated by an algorithm, I’m not sure I’d be able to tell the difference.
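The training objective described here, guess the next audio sample and then nudge the network toward the right answer, looks roughly like the following sketch over 8-bit quantised samples (nowhere near the scale of the system behind the album):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N_LEVELS = 256                                  # 8-bit quantised audio samples

class NextSampleRNN(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(N_LEVELS, 64)
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.out = nn.Linear(hidden, N_LEVELS)

    def forward(self, samples):                 # samples: (batch, time) int64
        h, _ = self.rnn(self.embed(samples))
        return self.out(h)                      # logits for the next sample at each step

model = NextSampleRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for quantised segments of the source album.
audio = torch.randint(0, N_LEVELS, (8, 501))

for step in range(100):
    inputs, targets = audio[:, :-1], audio[:, 1:]     # predict sample t+1 from samples <= t
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, N_LEVELS), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Generation: sample one value at a time and feed it back in, exactly the
# "guess the next waveform value" loop described above.
```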

Source: This frostbitten black metal album was created by an artificial intelligence | The Outline

Announcing the Initial Release of Mozilla’s Open Source Speech Recognition Model and Voice Dataset

I’m excited to announce the initial release of Mozilla’s open source speech recognition model that has an accuracy approaching what humans can perceive when listening to the same recordings. We are also releasing the world’s second largest publicly available voice dataset, which was contributed to by nearly 20,000 people globally.
[…]
This is why we started DeepSpeech as an open source project. Together with a community of likeminded developers, companies and researchers, we have applied sophisticated machine learning techniques and a variety of innovations to build a speech-to-text engine that has a word error rate of just 6.5% on LibriSpeech’s test-clean dataset.

In our initial release today, we have included pre-built packages for Python, NodeJS and a command-line binary that developers can use right away to experiment with speech recognition.
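As a rough idea of what the Python package looks like in use: the example client shipped with this initial release builds a Model from the released graph and alphabet files and calls stt() on 16 kHz audio. The constructor arguments below follow that early client and have changed in later releases, so treat this as a sketch and check the project README for the current signature; the file paths are placeholders.

```python
# Sketch based on the example client shipped with the initial (0.1.x) release;
# later DeepSpeech versions changed the Model constructor, so verify against the README.
import scipy.io.wavfile as wav
from deepspeech.model import Model

N_FEATURES = 26     # MFCC features per frame (values used by the early example client)
N_CONTEXT = 9       # context frames on each side
BEAM_WIDTH = 500

ds = Model("models/output_graph.pb", N_FEATURES, N_CONTEXT,
           "models/alphabet.txt", BEAM_WIDTH)

fs, audio = wav.read("my_recording.wav")   # expects 16 kHz, 16-bit mono WAV
print(ds.stt(audio, fs))                   # prints the transcription
```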
[…]
Our aim is to make it easy for people to donate their voices to a publicly available database, and in doing so build a voice dataset that everyone can use to train new voice-enabled applications.

Today, we’ve released the first tranche of donated voices: nearly 400,000 recordings, representing 500 hours of speech. Anyone can download this data.
[…]
To this end, while we’ve started with English, we are working hard to ensure that Common Voice will support voice donations in multiple languages beginning in the first half of 2018.

Finally, as we have experienced the challenge of finding publicly available voice datasets, alongside the Common Voice data we have also compiled links to download all the other large voice collections we know about.

Source: Announcing the Initial Release of Mozilla’s Open Source Speech Recognition Model and Voice Dataset – The Mozilla Blog