Artificial intelligence: Commission kicks off work on marrying cutting-edge technology and ethical standards

The Commission is setting up a group on artificial intelligence to gather expert input and rally a broad alliance of diverse stakeholders.

The expert group will also draw up a proposal for guidelines on AI ethics, building on today’s statement by the European Group on Ethics in Science and New Technologies.

From better healthcare to safer transport and more sustainable farming, artificial intelligence (AI) can bring major benefits to our society and economy. And yet, AI also raises questions about its impact on the future of work and on existing legislation. This calls for a wide, open and inclusive discussion on how to use and develop artificial intelligence in a way that is both successful and ethically sound.
[…]
Today the Commission has opened applications to join an expert group on artificial intelligence, which will be tasked to:

advise the Commission on how to build a broad and diverse community of stakeholders in a “European AI Alliance”;
support the implementation of the upcoming European initiative on artificial intelligence (April 2018);
come forward by the end of the year with draft guidelines for the ethical development and use of artificial intelligence based on the EU’s fundamental rights. In doing so, it will consider issues such as fairness, safety, transparency, the future of work, democracy and more broadly the impact on the application of the Charter of Fundamental Rights. The guidelines will be drafted following a wide consultation and building on today’s statement by the European Group on Ethics in Science and New Technologies (EGE), an independent advisory body to the European Commission.

Source: European Commission – PRESS RELEASES – Press release – Artificial intelligence: Commission kicks off work on marrying cutting-edge technology and ethical standards

The “Black Mirror” scenarios that are leading some experts to call for more secrecy on AI – MIT Technology Review

a new report by more than 20 researchers from the Universities of Oxford and Cambridge, OpenAI, and the Electronic Frontier Foundation warns that the same technology creates new opportunities for criminals, political operatives, and oppressive governments—so much so that some AI research may need to be kept secret.

Included in the report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, are four dystopian vignettes involving artificial intelligence that seem taken straight out of the Netflix science fiction show Black Mirror.

Source: The “Black Mirror” scenarios that are leading some experts to call for more secrecy on AI – MIT Technology Review

This is completely ridiculous. The knowledge is out there, and where it isn’t, it will be stolen. And if you don’t know about potential attack vectors, you are completely defenseless against them, and so are the security firms trying to help you.

Besides this, basing security on movie plots you can think up (and I’m pretty sure any reader can think up loads more, quite easily!) doesn’t work, because then you are vulnerable to every plot the other side thought up and you didn’t.

Good security is basic and intrinsic. AI/ML is here, and we need a solid discussion in our societies about how we want it to impact us, instead of all this cold-war fearmongering.

IBM Watson to generate sales solutions

“We’ve trained Watson on our standard solutions and offerings, plus all the prior solutions IBM has designed for large enterprises,” the corporate files state. “This means we can review a client’s RFP [request for proposal] and come up with a new proposed architecture and technical solution design for a state of the art system that can run enterprise businesses at scale.” Proposed solutions will be delivered “in minutes,” it is claimed.
[…]
IBM is not leaving all the work to Watson: a document we’ve seen also details “strong governance processes to ensure high quality solutions are delivered globally.”

Big Blue’s explanation for cognitive, er, solutioning’s role is that it will be “greatly aiding the work of the Technical Solutions Managers” rather than replacing them.

Source: If you don’t like what IBM is pitching, blame Watson: It’s generating sales ‘solutions’ now • The Register

Missing data hinder replication of artificial intelligence studies

Last year, computer scientists at the University of Montreal (U of M) in Canada were eager to show off a new speech recognition algorithm, and they wanted to compare it to a benchmark, an algorithm from a well-known scientist. The only problem: The benchmark’s source code wasn’t published. The researchers had to recreate it from the published description. But they couldn’t get their version to match the benchmark’s claimed performance, says Nan Rosemary Ke, a Ph.D. student in the U of M lab. “We tried for 2 months and we couldn’t get anywhere close.”
[…]
The most basic problem is that researchers often don’t share their source code. At the AAAI meeting, Odd Erik Gundersen, a computer scientist at the Norwegian University of Science and Technology in Trondheim, reported the results of a survey of 400 algorithms presented in papers at two top AI conferences in the past few years. He found that only 6% of the presenters shared the algorithm’s code. Only a third shared the data they tested their algorithms on, and just half shared “pseudocode”—a limited summary of an algorithm. (In many cases, code is also absent from AI papers published in journals, including Science and Nature.)
[…]
Assuming you can get and run the original code, it still might not do what you expect. In the area of AI called machine learning, in which computers derive expertise from experience, the training data for an algorithm can influence its performance. Ke suspects that not knowing how the speech-recognition benchmark was trained was what tripped up her group. “There’s randomness from one run to another,” she says. You can get “really, really lucky and have one run with a really good number,” she adds. “That’s usually what people report.”
[…]
Henderson’s experiment was conducted in a test bed for reinforcement learning algorithms called Gym, created by OpenAI, a nonprofit based in San Francisco, California. John Schulman, a computer scientist at OpenAI who helped create Gym, says that it helps standardize experiments. “Before Gym, a lot of people were working on reinforcement learning, but everyone kind of cooked up their own environments for their experiments, and that made it hard to compare results across papers,” he says.

IBM Research presented another tool at the AAAI meeting to aid replication: a system for recreating unpublished source code automatically, saving researchers days or weeks of effort. It’s a neural network—a machine learning algorithm made of layers of small computational units, analogous to neurons—that is designed to recreate other neural networks. It scans an AI research paper looking for a chart or diagram describing a neural net, parses those data into layers and connections, and generates the network in new code. The tool has now reproduced hundreds of published neural networks, and IBM is planning to make them available in an open, online repository.

Source: Missing data hinder replication of artificial intelligence studies | Science | AAAS
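
A side note on the run-to-run randomness Ke describes: the usual remedy is to report results over several seeds rather than the single best run. A minimal sketch of that habit, using a toy stand-in for a training run (the numbers are made up, not from the Science piece):

```python
import numpy as np

def train_once(seed: int) -> float:
    """Stand-in for a full training run: the returned 'accuracy'
    depends on random initialisation, just like a real model would."""
    rng = np.random.default_rng(seed)
    return 0.78 + 0.04 * rng.standard_normal()

scores = np.array([train_once(seed) for seed in range(10)])

print(f"best run : {scores.max():.3f}")                          # what often gets reported
print(f"mean/std : {scores.mean():.3f} +/- {scores.std():.3f}")  # what should be reported
```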

New AI model fills in blank spots in photos

The technology was developed by a team led by Hiroshi Ishikawa, a professor at Japan’s Waseda University. It uses convolutional neural networks, a type of deep learning, to predict missing parts of images. The technology could be used in photo-editing apps. It can also be used to generate 3-D images from real 2-D images.

The team at first prepared some 8 million images of real landscapes, human faces and other subjects. Using special software, the team generated numerous versions for each image, randomly adding artificial blanks of various shapes, sizes and positions. With all the data, the model took three months to learn how to predict the blanks so that it could fill them in and make the resultant images look identical to the originals.

The model’s learning algorithm first predicts and fills in blanks. It then evaluates how consistent the added part is with its surroundings.

Source: New AI model fills in blank spots in photos- Nikkei Asian Review
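
The data-preparation step described above (punching random holes into photos to get masked/original training pairs) is easy to sketch. This is a generic NumPy illustration with rectangular holes and a random stand-in image, not the Waseda team’s actual pipeline:

```python
import numpy as np

def make_training_pair(image: np.ndarray, rng: np.random.Generator,
                       max_frac: float = 0.3):
    """Return (masked_image, mask, original) for inpainting training.
    A random rectangle is blanked out; the network's job is to fill it in."""
    h, w = image.shape[:2]
    mh = rng.integers(h // 8, int(h * max_frac))
    mw = rng.integers(w // 8, int(w * max_frac))
    top = rng.integers(0, h - mh)
    left = rng.integers(0, w - mw)

    mask = np.zeros((h, w), dtype=bool)
    mask[top:top + mh, left:left + mw] = True

    masked = image.copy()
    masked[mask] = 0          # blank out the hole
    return masked, mask, image

rng = np.random.default_rng(0)
dummy = rng.random((256, 256, 3))           # stand-in for a real photo
masked, mask, original = make_training_pair(dummy, rng)
```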

Moth brain uploaded to computer, taught to recognise numbers

MothNet’s computer code, according to the boffins, contains layers of artificial neurons to simulate the bug’s antennal lobe and mushroom body, which are common parts of insect brains.

Crucially, instead of recognizing smells, the duo taught MothNet to identify handwritten digits in the MNIST dataset. This database is often used to train and test pattern recognition in computer vision applications.

The academics used supervised learning to train MothNet, feeding it about 15 to 20 images of each digit from zero to nine, and rewarding it when it recognized the numbers correctly.

Receptor neurons in the artificial brain processed the incoming images, and passed the information down to the antennal lobe, which learned the features of each number. This lobe was connected, by a set of projection neurons, to the sparse mushroom body. This section was wired up to extrinsic neurons, each ultimately representing an individual integer between zero and nine.
[…]
MothNet achieved 75 per cent to 85 per cent accuracy, the paper stated, despite relatively few training examples, seemingly outperforming more traditional neural networks when given the same amount of training data.
[…]
It shows that even the simple biological neural network of an insect brain can be taught basic image recognition tasks, and can potentially exceed other models when training examples and processing resources are scarce. The researchers believe that these biological neural networks (BNNs) can be “combined and stacked into larger, deeper neural nets.”

Source: Roses are red, are you single, we wonder? ‘Cos this moth-brain AI can read your phone number • The Register
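
For the curious, here is a minimal sketch of the general idea the article describes: a fixed, sparse, random “mushroom body”-style expansion with only a simple readout trained on top. It is emphatically not the authors’ MothNet, which uses biologically modelled, reward-driven learning; this version swaps that for an ordinary ridge-regression readout and random stand-in digits, purely to show the shape of the architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for a tiny MNIST subset: 15 examples per digit, 28x28 pixels.
X = rng.random((150, 784))
y = np.repeat(np.arange(10), 15)

# Fixed sparse random projection: the "mushroom body" expansion.
n_kenyon = 2000                      # many sparsely connected units
W = rng.standard_normal((784, n_kenyon)) * (rng.random((784, n_kenyon)) < 0.05)
H = np.maximum(X @ W, 0.0)           # sparse, rectified responses

# Only the readout ("extrinsic neurons", one per digit) is trained,
# here with a ridge-regularised least-squares fit to one-hot labels.
Y = np.eye(10)[y]
readout = np.linalg.solve(H.T @ H + 1e-2 * np.eye(n_kenyon), H.T @ Y)

pred = np.argmax(H @ readout, axis=1)
print("training accuracy on the toy data:", (pred == y).mean())
```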

Look out, Wiki-geeks. Now Google trains AI to write Wikipedia articles

A paper, out last month and just accepted for this year’s International Conference on Learning Representations (ICLR) in April, describes just how difficult text summarization really is.

A few companies have had a crack at it. Salesforce trained a recurrent neural network with reinforcement learning to take information and retell it in a nutshell, and the results weren’t bad.

However, the computer-generated sentences were simple and short; they lacked the creative flair and rhythm of text written by humans. Google Brain’s latest effort is slightly better: the sentences are longer and seem more natural.
[…]
The model works by taking the top ten web pages of a given subject – excluding the Wikipedia entry – or scraping information from the links in the references section of a Wikipedia article. Most of the selected pages are used for training, and a few are kept back to develop and test the system.

The paragraphs from each page are ranked, and the text from all the pages is combined to create one long document. The text is encoded and shortened by splitting it into 32,000 individual words, which are used as input.

This is then fed into an abstractive model, where the long sentences in the input are cut shorter. It’s a clever trick used to both create and summarize text. The generated sentences are taken from the earlier extraction phase and aren’t built from scratch, which explains why the structure is pretty repetitive and stiff.

Mohammad Saleh, co-author of the paper and a software engineer in Google AI’s team, told The Register: “The extraction phase is a bottleneck that determines which parts of the input will be fed to the abstraction stage. Ideally, we would like to pass all the input from reference documents.

“Designing models and hardware that can support longer input sequences is currently an active area of research that can alleviate these limitations.”

We are still a very long way from effective text summarization or generation. And while the Google Brain project is rather interesting, it would probably be unwise to use a system like this to automatically generate Wikipedia entries. For now, anyway.

Source: Look out, Wiki-geeks. Now Google trains AI to write Wikipedia articles • The Register
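
The extraction stage Saleh mentions can be pictured with a generic ranking sketch: score paragraphs against the topic, keep the best until a word budget is hit, and hand the result to the abstractive model. The TF-IDF scoring below is my stand-in, not the ranking used in the Google Brain paper:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extract_stage(topic: str, paragraphs: list, word_budget: int = 32_000) -> str:
    """Rank paragraphs by TF-IDF similarity to the topic, then keep
    the best ones until the word budget is exhausted."""
    vec = TfidfVectorizer(stop_words="english")
    matrix = vec.fit_transform([topic] + paragraphs)
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

    kept, used = [], 0
    for idx in scores.argsort()[::-1]:            # best paragraphs first
        words = paragraphs[idx].split()
        if used + len(words) > word_budget:
            break
        kept.append(paragraphs[idx])
        used += len(words)
    return "\n".join(kept)                        # input to the abstractive model

paras = ["Wikipedia is a free online encyclopedia.",
         "The weather was mild in April.",
         "It is written collaboratively by volunteers."]
print(extract_stage("Wikipedia online encyclopedia", paras, word_budget=50))
```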

Gfycat Uses Artificial Intelligence to Fight Deepfakes Porn

Gfycat says it’s figured out a way to train an artificial intelligence to spot fraudulent videos. The technology builds on a number of tools Gfycat already used to index the GIFs on its platform.
[…]
Gfycat’s AI approach leverages two tools it already developed, both (of course) named after felines: Project Angora and Project Maru. When a user uploads a low-quality GIF of, say, Taylor Swift to Gfycat, Project Angora can search the web for a higher-res version to replace it with. In other words, it can find the same clip of Swift singing “Shake It Off” and upload a nicer version.

Now let’s say you don’t tag your clip “Taylor Swift.” Not a problem. Project Maru can purportedly differentiate between individual faces and will automatically tag the GIF with Swift’s name. This makes sense from Gfycat’s perspective—it wants to index the millions of clips users upload to the platform monthly.

Here’s where deepfakes come in. Created by amateurs, most deepfakes aren’t entirely believable. If you look closely, the frames don’t quite match up; in one such clip, Donald Trump’s face doesn’t completely cover Angela Merkel’s throughout. Your brain does some of the work, filling in the gaps where the technology failed to turn one person’s face into another.

Project Maru is not nearly as forgiving as the human brain. When Gfycat’s engineers ran deepfakes through its AI tool, it would register that a clip resembled, say, Nicolas Cage, but not enough to issue a positive match, because the face isn’t rendered perfectly in every frame. Using Maru is one way that Gfycat can spot a deepfake—it smells a rat when a GIF only partially resembles a celebrity.

Source: Gfycat Uses Artificial Intelligence to Fight Deepfakes Porn | WIRED
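
The consistency check WIRED describes can be sketched as: score every frame against the claimed identity, and get suspicious when only some frames match strongly. `face_match_score` below is a hypothetical placeholder for a real face-recognition model, and the thresholds are invented, not Gfycat’s:

```python
import numpy as np

def face_match_score(frame, claimed_identity) -> float:
    """Hypothetical stand-in for a face-recognition model: returns a
    0..1 confidence that the face in `frame` is `claimed_identity`."""
    raise NotImplementedError("plug in a real face-recognition model here")

def looks_like_deepfake(frames, claimed_identity,
                        strong=0.9, weak=0.5) -> bool:
    """Flag clips where only *some* frames convincingly match the claimed
    identity: a few strong matches mixed with many weak ones is suspicious."""
    scores = np.array([face_match_score(f, claimed_identity) for f in frames])
    some_strong = (scores > strong).any()
    many_weak = (scores < weak).mean() > 0.3
    return bool(some_strong and many_weak)
```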

Facial Recognition Is Accurate, if You’re a White Guy

Facial recognition technology is improving by leaps and bounds. Some commercial software can now tell the gender of a person in a photograph.

When the person in the photo is a white man, the software is right 99 percent of the time.

But the darker the skin, the more errors arise — up to nearly 35 percent for images of darker skinned women, according to a new study that breaks fresh ground by measuring how the technology works on people of different races and gender.

These disparate results, calculated by Joy Buolamwini, a researcher at the M.I.T. Media Lab, show how some of the biases in the real world can seep into artificial intelligence, the computer systems that inform facial recognition.
[…]
One widely used facial-recognition data set was estimated to be more than 75 percent male and more than 80 percent white, according to another research study.

The new study also raises broader questions of fairness and accountability in artificial intelligence at a time when investment in and adoption of the technology is racing ahead.
[…]
The African and Nordic faces were scored according to a six-point labeling system used by dermatologists to classify skin types. The medical classifications were determined to be more objective and precise than race.

Then, each company’s software was tested on the curated data, crafted for gender balance and a range of skin tones. The results varied somewhat. Microsoft’s error rate for darker-skinned women was 21 percent, while IBM’s and Megvii’s rates were nearly 35 percent. They all had error rates below 1 percent for light-skinned males.

Source: Facial Recognition Is Accurate, if You’re a White Guy – The New York Times
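
The methodological core of the study is simply that error rates are reported per subgroup instead of as one aggregate number. A tiny sketch of that disaggregated evaluation, with made-up records:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label).
    Returns the error rate per group instead of one aggregate number."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        errors[group] += int(truth != pred)
    return {g: errors[g] / totals[g] for g in totals}

# Made-up example records: (subgroup, true gender, predicted gender)
records = [
    ("darker-skinned female", "F", "M"),
    ("darker-skinned female", "F", "F"),
    ("lighter-skinned male", "M", "M"),
    ("lighter-skinned male", "M", "M"),
]
print(error_rates_by_group(records))
# {'darker-skinned female': 0.5, 'lighter-skinned male': 0.0}
```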

Worm brain translated into a computer is taught tricks without programming

It is not much to look at: the nematode C. elegans is about one millimetre in length and is a very simple organism. But for science, it is extremely interesting. C. elegans is the only living being whose nervous system has been analysed completely. It can be drawn as a circuit diagram or reproduced by computer software, so that the neural activity of the worm is simulated by a computer program.

Such an artificial C. elegans has now been trained at TU Wien (Vienna) to perform a remarkable trick: The computer worm has learned to balance a pole at the tip of its tail.
[…]
“With the help of reinforcement learning, a method also known as ‘learning based on trial and reward’, the artificial reflex network was trained and optimized on the computer”, Mathias Lechner explains. And indeed, the team succeeded in teaching the virtual nerve system to balance a pole. “The result is a controller which can solve a standard technology problem – stabilizing a pole balanced on its tip. But no human being has written even one line of code for this controller; it just emerged by training a biological nerve system”, says Radu Grosu.

The team is going to explore the capabilities of such control circuits further. The project raises the question of whether there is a fundamental difference between living nerve systems and computer code. Are machine learning and the activity of our brain the same on a fundamental level? At least we can be pretty sure that the simple nematode C. elegans does not care whether it lives as a worm in the ground or as a virtual worm on a computer hard drive.

Source: Technische Universität Wien: Trained computer worm learns to balance a pole (press release in German)
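
TU Wien hasn’t published code in this release, but the “trial and reward” loop is easy to illustrate. The sketch below trains a linear controller to balance a pole on a cart by random search over a simplified textbook cart-pole model; it is a generic stand-in, not the worm’s mapped nervous system or the team’s actual training method:

```python
import numpy as np

def cartpole_step(state, force, dt=0.02, g=9.8, mc=1.0, mp=0.1, l=0.5):
    """One step of the classic cart-pole dynamics (simplified textbook form)."""
    x, x_dot, th, th_dot = state
    sin, cos = np.sin(th), np.cos(th)
    temp = (force + mp * l * th_dot**2 * sin) / (mc + mp)
    th_acc = (g * sin - cos * temp) / (l * (4.0 / 3.0 - mp * cos**2 / (mc + mp)))
    x_acc = temp - mp * l * th_acc * cos / (mc + mp)
    return np.array([x + dt * x_dot, x_dot + dt * x_acc,
                     th + dt * th_dot, th_dot + dt * th_acc])

def rollout(weights, rng, max_steps=500):
    """Reward = number of steps before the pole falls or the cart runs away."""
    state = rng.uniform(-0.05, 0.05, size=4)
    for t in range(max_steps):
        force = 10.0 if weights @ state > 0 else -10.0   # linear "reflex" policy
        state = cartpole_step(state, force)
        if abs(state[0]) > 2.4 or abs(state[2]) > 0.21:  # cart or pole out of bounds
            return t
    return max_steps

rng = np.random.default_rng(0)
best_w, best_r = None, -1
for trial in range(200):                                  # trial and reward
    w = rng.uniform(-1, 1, size=4)
    r = rollout(w, rng)
    if r > best_r:
        best_w, best_r = w, r
print("best survival time:", best_r, "steps")
```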

Robot learns to mimic simple human motions

Researchers from the University of California, Berkeley, in the USA, have made some progress on this front by teaching code controlling a robot arm and hand to perform three tasks: grabbing an object and placing it in a specific position; pushing an object; and pushing and pulling an object after seeing the same action performed by a human arm.

Think picking up stuff, such as a toy, and placing it on a box, pushing a little car along a table, and so on.

The technique, described in a paper out this week, has been dubbed “one-shot imitation.” And, yes, it requires a lot of training before it can start copycatting people on demand. The idea is to educate the code to the point where it can immediately recognize movements, or similar movements, from its training, and replay them.

A few thousand videos depicting a human arm and a robot arm completing movements and actions are used to prime the control software. The same actions are repeated using different backgrounds, lighting effects, objects, and human arms to increase the depth of the machine-learning model’s awareness of how the limbs generally operate, and thus increase the chances of the robot successfully imitating a person on the fly.

Source: Is that you, T-1000? No, just a lil robot that can mimic humans on sight • The Register
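
The variation in backgrounds, lighting and arms described above is essentially data augmentation for robustness. A small, generic sketch of that idea (random backgrounds and brightness jitter around an unchanged demonstration), not the Berkeley team’s pipeline:

```python
import numpy as np

def randomize_frame(frame: np.ndarray, fg_mask: np.ndarray,
                    rng: np.random.Generator) -> np.ndarray:
    """Composite the foreground (arm + object) onto a random background
    and jitter the brightness, keeping the demonstrated motion unchanged."""
    background = rng.random(frame.shape)          # stand-in for a real photo
    out = np.where(fg_mask[..., None], frame, background)
    brightness = rng.uniform(0.6, 1.4)            # lighting jitter
    return np.clip(out * brightness, 0.0, 1.0)

rng = np.random.default_rng(1)
frame = rng.random((120, 160, 3))                 # stand-in demonstration frame
fg_mask = np.zeros((120, 160), dtype=bool)
fg_mask[40:80, 60:100] = True                     # pretend this is the arm/object
augmented = [randomize_frame(frame, fg_mask, rng) for _ in range(8)]
```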

Engineers design artificial synapse for “brain-on-a-chip” hardware

engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.
[…]
Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions should move in the switching medium to create conductive filaments, similarly to how the “weight” of a synapse changes.

But it’s been difficult to control the flow of ions in existing designs. Kim says that’s because most switching mediums, made of amorphous materials, have unlimited possible paths through which ions can travel — a bit like Pachinko, a mechanical arcade game that funnels small steel balls down through a series of pins and levers, which act to either divert or direct the balls out of the machine.

Like Pachinko, existing switching mediums contain multiple paths that make it difficult to predict where ions will make it through. Kim says that can create unwanted nonuniformity in a synapse’s performance.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

A perfect mismatch

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material also used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow.

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.

Source: Engineers design artificial synapse for “brain-on-a-chip” hardware | MIT News
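
MIT’s release doesn’t say exactly how the 4 percent and 1 percent figures were computed; one plausible reading is a coefficient of variation of the measured current, which is trivial to check. A sketch on made-up measurements:

```python
import numpy as np

def percent_variation(currents: np.ndarray) -> float:
    """Coefficient of variation of measured currents, in percent."""
    return 100.0 * currents.std() / currents.mean()

rng = np.random.default_rng(7)
device_to_device = rng.normal(1.0, 0.04, size=100)   # made-up currents, 100 synapses
cycle_to_cycle = rng.normal(1.0, 0.01, size=700)     # made-up currents, 700 cycles

print(f"across synapses: ~{percent_variation(device_to_device):.1f}%")
print(f"across cycles  : ~{percent_variation(cycle_to_cycle):.1f}%")
```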

Revealing True Emotions Through Micro-Expressions: A Machine Learning Approach

Micro-expressions–involuntary, fleeting facial movements that reveal true emotions–hold valuable information for scenarios ranging from security interviews and interrogations to media analysis. They occur on various regions of the face, last only a fraction of a second, and are universal across cultures. In contrast to macro-expressions like big smiles and frowns, micro-expressions are extremely subtle and nearly impossible to suppress or fake. Because micro-expressions can reveal emotions people may be trying to hide, recognizing micro-expressions can aid DoD forensics and intelligence mission capabilities by providing clues to predict and intercept dangerous situations. This blog post, the latest highlighting research from the SEI Emerging Technology Center in machine emotional intelligence, describes our work on developing a prototype software tool to recognize micro-expressions in near real-time.

Source: Revealing True Emotions Through Micro-Expressions: A Machine Learning Approach
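
One rough way to picture a near real-time micro-expression detector is to track facial landmarks frame by frame and flag short bursts of motion that rise well above baseline and die away within a fraction of a second. The sketch below does exactly that; `extract_landmarks` is a hypothetical placeholder for a real landmark model, the thresholds are invented, and none of this is the SEI prototype’s actual method:

```python
import numpy as np

def extract_landmarks(frame) -> np.ndarray:
    """Hypothetical stand-in: returns an (N, 2) array of facial landmark positions."""
    raise NotImplementedError("plug in a real landmark detector here")

def micro_expression_frames(frames, fps: float = 30.0,
                            spike: float = 2.0, max_duration: float = 0.5):
    """Flag frames where landmark motion spikes briefly above baseline."""
    marks = np.stack([extract_landmarks(f) for f in frames])                  # (T, N, 2)
    motion = np.linalg.norm(np.diff(marks, axis=0), axis=2).mean(axis=1)      # per-frame motion
    baseline = np.median(motion)
    active = motion > spike * baseline

    flagged, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if (i - start) / fps <= max_duration:     # keep only very short bursts
                flagged.extend(range(start, i))
            start = None
    return flagged
```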

Facebook open sources Detectron, object detection framework in Caffe2

Today, Facebook AI Research (FAIR) open sourced Detectron — our state-of-the-art platform for object detection research.

The Detectron project was started in July 2016 with the goal of creating a fast and flexible object detection system built on Caffe2, which was then in early alpha development. Over the last year and a half, the codebase has matured and supported a large number of our projects, including Mask R-CNN and Focal Loss for Dense Object Detection, which won the Marr Prize and Best Student Paper awards, respectively, at ICCV 2017. These algorithms, powered by Detectron, provide intuitive models for important computer vision tasks, such as instance segmentation, and have played a key role in the unprecedented advancement of visual perception systems that our community has achieved in recent years.

Source: Facebook open sources Detectron – Facebook Research

Active learning machine learns to create new quantum experiments

We present an autonomous learning model which learns to design such complex experiments, without relying on previous knowledge or often flawed intuition. Our system not only learns how to design desired experiments more efficiently than the best previous approaches, but in the process also discovers nontrivial experimental techniques. Our work demonstrates that learning machines can offer dramatic advances in how experiments are generated.
[…]
The artificial intelligence system learns to create a variety of entangled states and improves the efficiency of their realization. In the process, the system autonomously (re)discovers experimental techniques which are only now becoming standard in modern quantum optical experiments—a trait which was not explicitly demanded from the system but emerged through the process of learning. Such features highlight the possibility that machines could have a significantly more creative role in future research.

Source: Active learning machine learns to create new quantum experiments

The artificial agent develops new experiments by virtually placing mirrors, prisms or beam splitters on a virtual lab table. If its actions lead to a meaningful result, the agent has a higher chance of finding a similar sequence of actions in the future. This is known as a reinforcement learning strategy.

Read more at: https://phys.org/news/2018-01-artificial-agent-quantum.html
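
The “higher chance of finding a similar sequence of actions in the future” loop can be sketched very crudely: sample a sequence of optical elements, evaluate it, and bump up the sampling weights of rewarded choices. The toolbox, the evaluator and the weight-bumping rule below are generic stand-ins, not the learning model used in the paper:

```python
import random

ELEMENTS = ["beam_splitter", "mirror", "prism", "holo_plate"]  # toy toolbox

def evaluate_setup(setup) -> float:
    """Hypothetical stand-in for the simulated optical table: returns a
    reward, e.g. 1.0 if the setup produces an interesting entangled state."""
    raise NotImplementedError("plug in a quantum-optics simulator here")

def train(episodes: int = 1000, max_len: int = 6, lr: float = 0.5):
    # One preference weight per (position, element); rewarded choices become more likely.
    prefs = [{e: 1.0 for e in ELEMENTS} for _ in range(max_len)]
    for _ in range(episodes):
        setup = [random.choices(ELEMENTS, weights=[p[e] for e in ELEMENTS])[0]
                 for p in prefs]
        reward = evaluate_setup(setup)
        for p, chosen in zip(prefs, setup):
            p[chosen] += lr * reward          # reinforce rewarded action sequences
    return prefs
```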

New AI System Predicts How Long Patients Will Live With Startling Accuracy

By using an artificially intelligent algorithm to predict patient mortality, a research team from Stanford University is hoping to improve the timing of end-of-life care for critically ill patients.

After parsing through 2 million records, the researchers identified 200,000 patients suitable for the project. The researchers were “agnostic” to disease type, disease stage, severity of admission (ICU versus non-ICU), and so on. All of these patients had associated case reports, including a diagnosis, the number of scans ordered, the types of procedures performed, the number of days spent in the hospital, medicines used, and other factors.

The deep learning algorithm studied the case reports from 160,000 of these patients, and was given the directive: “Given a patient and a date, predict the mortality of that patient within 12 months from that date, using EHR data of that patient from the prior year.” The system was trained to predict patient mortality within the next three to 12 months. Patients with less than three months to live weren’t considered, as that would leave insufficient time for palliative care preparations.

Armed with its new skills, the algorithm was tasked with assessing the remaining 40,000 patients. It did quite well, successfully predicting patient mortality within the 3 to 12 month timespan in nine out of 10 cases. Around 95 percent of patients who were assessed with a low probability of dying within that period lived beyond 12 months. The pilot study proved successful, and the researchers are now hoping their system will be applied more broadly.

Source: New AI System Predicts How Long Patients Will Live With Startling Accuracy
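
Stripped of the deep learning, the task is a binary classification: given features derived from a patient’s EHR up to a reference date, predict death within the following 12 months. A minimal sketch of that framing with synthetic data and a logistic-regression stand-in (the Stanford team used a deep neural network and far richer features):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in features per (patient, reference date): e.g. counts of diagnoses,
# scans ordered, procedures, hospital days and medications over the prior year.
X = rng.poisson(3.0, size=(200_000, 5)).astype(float)
# Stand-in label: 1 if the patient died within 12 months of the reference date.
y = (rng.random(200_000) < 0.1).astype(int)

# 160k records for training, 40k held out, as in the study's split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=40_000, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]     # predicted 12-month mortality risk

low_risk = proba < 0.1
print("share of 'low risk' patients who (in this toy data) survived:",
      1 - y_test[low_risk].mean())
```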

When It Comes to Gorillas, Google Photos Remains Blind – it’s hard to hold an AI to account

In a third test attempting to assess Google Photos’ view of people, WIRED also uploaded a collection of more than 10,000 images used in facial-recognition research. The search term “African american” turned up only an image of grazing antelope. Typing “black man,” “black woman,” or “black person,” caused Google’s system to return black-and-white images of people, correctly sorted by gender, but not filtered by race. The only search terms with results that appeared to select for people with darker skin tones were “afro” and “African,” although results were mixed.

A Google spokesperson confirmed that “gorilla” was censored from searches and image tags after the 2015 incident, and that “chimp,” “chimpanzee,” and “monkey” are also blocked today. “Image labeling technology is still early and unfortunately it’s nowhere near perfect,” the spokesperson wrote in an email, highlighting a feature of Google Photos that allows users to report mistakes.

Google’s caution around images of gorillas illustrates a shortcoming of existing machine-learning technology. With enough data and computing power, software can be trained to categorize images or transcribe speech to a high level of accuracy. But it can’t easily go beyond the experience of that training. And even the very best algorithms lack the ability to use common sense, or abstract concepts, to refine their interpretation of the world as humans do.

Source: When It Comes to Gorillas, Google Photos Remains Blind | WIRED

Boffins tweak audio by 0.1% to fool speech recognition engines

a paper by Nicholas Carlini and David Wagner of the University of California, Berkeley has shown off a technique to trick speech recognition by changing the source waveform by 0.1 per cent.

The pair wrote at arXiv that their attack achieved a first: not merely an attack that made a speech recognition (SR) engine fail, but one that returned a result chosen by the attacker. In other words, because the attack waveform is 99.9 per cent identical to the original, a human wouldn’t notice what’s wrong with a recording of “it was the best of times, it was the worst of times”, but an AI could be tricked into transcribing it as something else entirely: the authors say it could produce “it is a truth universally acknowledged that a single” from a slightly altered sample.

It works every single time: the pair claimed a 100 per cent success rate for their attack, and frighteningly, an attacker can even hide a target waveform in what (to the observer) appears to be silence.

Source: Boffins tweak audio by 0.1% to fool speech recognition engines • The Register
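
The excerpt doesn’t include the optimisation details, but the general recipe for a targeted attack is to optimise a perturbation so the model outputs the attacker’s chosen result while a distortion penalty keeps the change inaudible (the real attack optimises a CTC loss against Mozilla’s DeepSpeech). A toy PyTorch sketch of that loop, with a random linear classifier standing in for a speech recogniser:

```python
import torch

torch.manual_seed(0)

# Stand-in "speech recogniser": a frozen random linear classifier over a
# 16000-sample waveform. A real attack would use a full SR model and CTC loss.
model = torch.nn.Linear(16000, 10)
for p in model.parameters():
    p.requires_grad_(False)

waveform = torch.rand(1, 16000) * 2 - 1        # stand-in audio in [-1, 1]
target = torch.tensor([3])                     # attacker-chosen output class

delta = torch.zeros_like(waveform, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-3)
c = 0.1                                        # weight on the distortion penalty

for step in range(500):
    adv = torch.clamp(waveform + delta, -1.0, 1.0)
    loss = torch.nn.functional.cross_entropy(model(adv), target) + c * delta.norm()
    opt.zero_grad()
    loss.backward()
    opt.step()

adv = torch.clamp(waveform + delta.detach(), -1.0, 1.0)
print("predicted class:", model(adv).argmax(dim=1).item(),
      "relative distortion:", (delta.detach().norm() / waveform.norm()).item())
```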

This Ex-NSA Hacker Is Building an AI to Find Hate Symbols on Twitter

NEMESIS, according to Crose, can help spot symbols that have been co-opted by hate groups to signal to each other in plain sight. At a glance, the way NEMESIS works is relatively simple. There’s an “inference graph,” a mathematical representation of the images it was trained on, classified as Nazi or white supremacist symbols. Using this inference graph and machine learning, the system identifies the symbols in the wild, whether they are in pictures or videos.

Source: This Ex-NSA Hacker Is Building an AI to Find Hate Symbols on Twitter – Motherboard

AI System Sorts News Articles By Whether or Not They Contain Actual Information

In a recent paper published in the Journal of Artificial Intelligence Research, computer scientists Ani Nenkova and Yinfei Yang, of Google and the University of Pennsylvania, respectively, describe a new machine learning approach to classifying written journalism according to a formalized idea of “content density.” With an average accuracy of around 80 percent, their system was able to accurately classify news stories across a wide range of domains, spanning from international relations and business to sports and science journalism, when evaluated against a ground truth dataset of already correctly classified news articles.

Source: AI System Sorts News Articles By Whether or Not They Contain Actual Information – Motherboard
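
The paper’s features and model aren’t described in this excerpt, so purely as an illustration of the task framing, here is how a “content dense vs. not” text classifier could be set up with scikit-learn; the labels and pipeline are stand-ins, not the authors’ method:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Stand-in training data: article lead paragraphs labelled 1 if "content dense".
texts = [
    "The central bank raised interest rates by 0.5 percentage points on Tuesday.",
    "You won't believe what happened next in this incredible story.",
    "Researchers reported a 12 percent drop in emissions over five years.",
    "Here are some thoughts and musings about things in general.",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["The company posted quarterly revenue of 3.2 billion dollars."]))
```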

Google’s voice-generating AI is now indistinguishable from humans

A research paper published by Google this month—which has not been peer reviewed—details a text-to-speech system called Tacotron 2, which claims near-human accuracy at imitating audio of a person speaking from text.

The system is Google’s second official generation of the technology, which consists of two deep neural networks. The first network translates the text into a spectrogram (pdf), a visual way to represent audio frequencies over time. That spectrogram is then fed into WaveNet, a system from Alphabet’s AI research lab DeepMind, which reads the chart and generates the corresponding audio elements accordingly.

Source: Google’s voice-generating AI is now indistinguishable from humans — Quartz
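
The intermediate representation is worth seeing for yourself: a mel spectrogram is just audio energy per frequency band over time. A small sketch using librosa (my library choice, not Google’s; the filename is a placeholder):

```python
import numpy as np
import librosa

# Load any speech clip; 22.05 kHz is a common rate for TTS work.
y, sr = librosa.load("speech_sample.wav", sr=22050)

# Mel spectrogram: energy per mel-frequency band per time frame.
S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=80, hop_length=256)
S_db = librosa.power_to_db(S, ref=np.max)

print("spectrogram shape (mel bands, frames):", S_db.shape)
# A Tacotron-style system predicts frames like these from text, and a
# WaveNet-style vocoder then converts the frames back into a waveform.
```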

Project Maven brings AI to the fight against ISIS

For years, the Defense Department’s most senior leadership has lamented the fact that US military and spy agencies, where artificial intelligence (AI) technology is concerned, lag far behind state-of-the-art commercial technology. Though US companies and universities lead the world in advanced AI research and commercialization, the US military still performs many activities in a style that would be familiar to the military of World War II.

As of this month, however, that has begun to change. Project Maven is a crash Defense Department program that was designed to deliver AI technologies—specifically, technologies that involve deep learning neural networks—to an active combat theater within six months from when the project received funding. Most defense acquisition programs take years or even decades to reach the battlefield, but technologies developed through Project Maven have already been successfully deployed in the fight against ISIS. Despite their rapid development and deployment, these technologies are getting strong praise from their military intelligence users. For the US national security community, Project Maven’s frankly incredible success foreshadows enormous opportunities ahead—as well as enormous organizational, ethical, and strategic challenges.
[…]
As its AI beachhead, the department chose Project Maven, which focuses on analysis of full-motion video data from tactical aerial drone platforms such as the ScanEagle and medium-altitude platforms such as the MQ-1C Gray Eagle and the MQ-9 Reaper. These drone platforms and their full-motion video sensors play a major role in the conflict against ISIS across the globe. The tactical and medium-altitude video sensors of the ScanEagle, MQ-1C, and MQ-9 produce imagery that more or less resembles what you see on Google Earth. A single drone with these sensors produces many terabytes of data every day. Before AI was incorporated into analysis of this data, it took a team of analysts working 24 hours a day to exploit only a fraction of one drone’s sensor data.
[…]
Now that Project Maven has met the sky-high expectations of the department’s former second-ranking official, its success will likely spawn a hundred copycats throughout the military and intelligence community. The department must ensure that these copycats actually replicate Project Maven’s secret sauce—which is not merely its focus on AI technology. The project’s success was enabled by its organizational structure: a small, operationally focused, cross-functional team that was empowered to develop external partnerships, leverage existing infrastructure and platforms, and engage with user communities iteratively during development. AI needs to be woven throughout the fabric of the Defense Department, and many existing department institutions will have to adopt project management structures similar to Maven’s if they are to run effective AI acquisition programs.
[…]
To its credit, the department selected drone video footage as an AI beachhead because it wanted to avoid some of the more thorny ethical and strategic challenges associated with automation in warfare. As US military and intelligence agencies implement modern AI technology across a much more diverse set of missions, they will face wrenching strategic, ethical, and legal challenges—which Project Maven’s narrow focus helped it avoid.

Source: Project Maven brings AI to the fight against ISIS | Bulletin of the Atomic Scientists

Canada to use AI to Study ‘Suicide-Related Behavior’ on Social Media

This month the Canadian government is launching a pilot program to research and predict suicide rates in the country using artificial intelligence. The pilot will mine Canadians’ social media posts “in order to identify patterns associated with users who discuss suicide-related behavior,” according to a recently published contract document.

Source: Canada Is Using AI to Study ‘Suicide-Related Behavior’ on Social Media

Using stickers in the field of view to fool image recognition AIs

In a research paper presented in December through a workshop at the 31st Conference on Neural Information Processing Systems (NIPS 2017) and made available last week through ArXiv, a team of researchers from Google discuss a technique for creating an adversarial patch.

This patch, sticker, or cutout consists of a psychedelic graphic which, when placed next to an object like a banana, makes image recognition software see something entirely different, such as a toaster.
[…]
“We construct an attack that does not attempt to subtly transform an existing item into another,” the researchers explain. “Instead, this attack generates an image-independent patch that is extremely salient to a neural network. This patch can then be placed anywhere within the field of view of the classifier, and causes the classifier to output a targeted class.”

The boffins observe that because the patch is separate from the scene, it allows attacks on image recognition systems without concern for lighting conditions, camera angles, the type of classifier being attacked, or other objects present in the scene.

While the ruse recalls schemes to trick face scanning systems with geometric makeup patterns, it doesn’t involve altering the salient object in the scene. The addition of the adversarial patch to the scene is enough to confuse the image classification code.

Source: Now that’s sticker shock: Sticky labels make image-recog AI go bananas for toasters • The Register
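
The training recipe the researchers describe boils down to: paste a learnable patch at random places in training images and optimise it so a frozen classifier reports the target class regardless of the rest of the scene. A condensed PyTorch sketch of that loop, simplified from the paper’s setup (no rotation/scale transforms, random stand-in images instead of a real photo dataset, an ImageNet ResNet as the victim):

```python
import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen victim classifier (torchvision >= 0.13 weights API).
model = torchvision.models.resnet50(weights="DEFAULT").to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

target = torch.tensor([859], device=device)       # ImageNet "toaster", the paper's example
patch = torch.rand(3, 50, 50, device=device, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

def apply_patch(images, patch):
    """Paste the patch at a random location in each image."""
    out = images.clone()
    _, _, h, w = images.shape
    ph, pw = patch.shape[1:]
    for i in range(images.shape[0]):
        top = torch.randint(0, h - ph, (1,)).item()
        left = torch.randint(0, w - pw, (1,)).item()
        out[i, :, top:top + ph, left:left + pw] = patch.clamp(0, 1)
    return out

for step in range(100):
    # Stand-in training images; a real run would iterate over real photos
    # and also apply the usual ImageNet normalisation.
    images = torch.rand(4, 3, 224, 224, device=device)
    logits = model(apply_patch(images, patch))
    loss = torch.nn.functional.cross_entropy(logits, target.expand(4))
    opt.zero_grad()
    loss.backward()
    opt.step()
```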