AI System Sorts News Articles By Whether or Not They Contain Actual Information

In a recent paper published in the Journal of Artificial Intelligence Research, computer scientists Ani Nenkova and Yinfei Yang, of Google and the University of Pennsylvania, respectively, describe a new machine learning approach to classifying written journalism according to a formalized idea of “content density.” Evaluated against a ground-truth dataset of pre-labeled news articles, their system classified stories across a wide range of domains, from international relations and business to sports and science journalism, with an average accuracy of around 80 percent.

Source: AI System Sorts News Articles By Whether or Not They Contain Actual Information – Motherboard

Google’s voice-generating AI is now indistinguishable from humans

A research paper published by Google this month—which has not been peer reviewed—details a text-to-speech system called Tacotron 2, which claims near-human accuracy at imitating audio of a person speaking from text.

The system is Google’s second official generation of the technology, which consists of two deep neural networks. The first network translates the text into a spectrogram, a visual way to represent audio frequencies over time. That spectrogram is then fed into WaveNet, a system from Alphabet’s AI research lab DeepMind, which reads the chart and generates the corresponding audio.
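
For readers unfamiliar with spectrograms, here is a minimal sketch of how one is computed from a waveform (plain NumPy; this is the generic short-time Fourier transform, not Google's actual pipeline):

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude short-time Fourier transform: one row per time frame,
    one column per frequency bin."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    # rfft keeps the frame_len // 2 + 1 non-negative frequency bins
    return np.abs(np.fft.rfft(frames, axis=1))

# A 440 Hz tone sampled at 8 kHz: energy concentrates in one frequency bin
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
peak_hz = spec.mean(axis=0).argmax() * sr / 256
print(peak_hz)  # close to 440 Hz (within one 31.25 Hz bin)
```

Tacotron 2 predicts spectrograms like these directly from text; WaveNet's job is then the inverse step, turning the predicted spectrogram back into a waveform.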

Source: Google’s voice-generating AI is now indistinguishable from humans — Quartz

Project Maven brings AI to the fight against ISIS

For years, the Defense Department’s most senior leadership has lamented the fact that US military and spy agencies, where artificial intelligence (AI) technology is concerned, lag far behind state-of-the-art commercial technology. Though US companies and universities lead the world in advanced AI research and commercialization, the US military still performs many activities in a style that would be familiar to the military of World War II.

As of this month, however, that has begun to change. Project Maven is a crash Defense Department program that was designed to deliver AI technologies—specifically, technologies that involve deep learning neural networks—to an active combat theater within six months from when the project received funding. Most defense acquisition programs take years or even decades to reach the battlefield, but technologies developed through Project Maven have already been successfully deployed in the fight against ISIS. Despite their rapid development and deployment, these technologies are getting strong praise from their military intelligence users. For the US national security community, Project Maven’s frankly incredible success foreshadows enormous opportunities ahead—as well as enormous organizational, ethical, and strategic challenges.
[…]
As its AI beachhead, the department chose Project Maven, which focuses on analysis of full-motion video data from tactical aerial drone platforms such as the ScanEagle and medium-altitude platforms such as the MQ-1C Gray Eagle and the MQ-9 Reaper. These drone platforms and their full-motion video sensors play a major role in the conflict against ISIS across the globe. The tactical and medium-altitude video sensors of the ScanEagle, MQ-1C, and MQ-9 produce imagery that more or less resembles what you see on Google Earth. A single drone with these sensors produces many terabytes of data every day. Before AI was incorporated into analysis of this data, it took a team of analysts working 24 hours a day to exploit only a fraction of one drone’s sensor data.
[…]
Now that Project Maven has met the sky-high expectations of the department’s former second-ranking official, its success will likely spawn a hundred copycats throughout the military and intelligence community. The department must ensure that these copycats actually replicate Project Maven’s secret sauce—which is not merely its focus on AI technology. The project’s success was enabled by its organizational structure: a small, operationally focused, cross-functional team that was empowered to develop external partnerships, leverage existing infrastructure and platforms, and engage with user communities iteratively during development. AI needs to be woven throughout the fabric of the Defense Department, and many existing department institutions will have to adopt project management structures similar to Maven’s if they are to run effective AI acquisition programs.
[…]
To its credit, the department selected drone video footage as an AI beachhead because it wanted to avoid some of the more thorny ethical and strategic challenges associated with automation in warfare. As US military and intelligence agencies implement modern AI technology across a much more diverse set of missions, they will face wrenching strategic, ethical, and legal challenges—which Project Maven’s narrow focus helped it avoid.

Source: Project Maven brings AI to the fight against ISIS | Bulletin of the Atomic Scientists

Canada to use AI to Study ‘Suicide-Related Behavior’ on Social Media

This month the Canadian government is launching a pilot program to research and predict suicide rates in the country using artificial intelligence. The pilot will mine Canadians’ social media posts “in order to identify patterns associated with users who discuss suicide-related behavior,” according to a recently published contract document.

Source: Canada Is Using AI to Study ‘Suicide-Related Behavior’ on Social Media

Using stickers in the field of view to fool image recognition AIs

In a research paper presented in December through a workshop at the 31st Conference on Neural Information Processing Systems (NIPS 2017) and made available last week through arXiv, a team of researchers from Google discusses a technique for creating an adversarial patch.

This patch, sticker, or cutout consists of a psychedelic graphic which, when placed next to an object like a banana, makes image recognition software see something entirely different, such as a toaster.
[…]
“We construct an attack that does not attempt to subtly transform an existing item into another,” the researchers explain. “Instead, this attack generates an image-independent patch that is extremely salient to a neural network. This patch can then be placed anywhere within the field of view of the classifier, and causes the classifier to output a targeted class.”

The boffins observe that because the patch is separate from the scene, it allows attacks on image recognition systems without concern for lighting conditions, camera angles, the type of classifier being attacked, or other objects present in the scene.

While the ruse recalls schemes to trick face scanning systems with geometric makeup patterns, it doesn’t involve altering the salient object in the scene. The addition of the adversarial patch to the scene is enough to confuse the image classification code.
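
As a deliberately simplified illustration of why one salient patch can hijack a classifier, here is a toy sketch. The "classifier" is a template matcher (a crude stand-in for a convolutional network with max pooling), and the patch is simply the bounded image most correlated with the target template; everything here is invented for illustration, and the real attack finds its patch by iterative optimization rather than in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
P = 8                                       # patch and template size
templates = rng.normal(size=(3, P, P))      # toy "classifier": a template per class
target = 2

def logits(img):
    """Score each class by its best template match anywhere in the image,
    a crude stand-in for a convolutional classifier with max pooling."""
    H, W = img.shape
    scores = np.full(3, -np.inf)
    for c in range(3):
        for y in range(H - P + 1):
            for x in range(W - P + 1):
                scores[c] = max(scores[c],
                                float(np.sum(templates[c] * img[y:y+P, x:x+P])))
    return scores

# The patch that maximally excites the target class under a pixel bound of
# +/-3 is the bounded image most correlated with the target template.
patch = 3.0 * np.sign(templates[target])

# Paste the patch into a faint random scene at an arbitrary position:
# the classifier's output flips to the target class, wherever the patch sits.
scene = rng.normal(size=(16, 16)) * 0.05
scene[4:12, 2:10] = patch
print(int(logits(scene).argmax()))  # → 2
```

Because the patch's response dwarfs anything else in the scene, its position, the lighting of the rest of the image, and the other objects present barely matter, which mirrors the robustness the researchers describe.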

Source: Now that’s sticker shock: Sticky labels make image-recog AI go bananas for toasters • The Register

Another AI attack, this time against ‘black box’ machine learning

Unlike adversarial models that attack AIs “from the inside”, attacks developed for black boxes could be used against closed systems like autonomous cars, security (facial recognition, for example), or speech recognition (Alexa or Cortana).

The tool, called Foolbox, is currently under review for presentation at next year’s International Conference on Learning Representations (kicking off at the end of April). Wieland Brendel, Jonas Rauber and Matthias Bethge of the Eberhard Karls University of Tübingen, Germany, explained on arXiv that Foolbox implements a “decision-based” attack called a boundary attack, which “starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial”.

“Its basic operating principle – starting from a large perturbation and successively reducing it – inverts the logic of essentially all previous adversarial attacks. Besides being surprisingly simple, the boundary attack is also extremely flexible”, they wrote.

For example, “transfer-based attacks” have to be tested against the same training data as the models they’re attacking, and need “cumbersome substitute models”.

Gradient-based attacks, the paper claimed, also need detailed knowledge about the target model, while score-based attacks need access to the target model’s confidence scores.

The boundary attack, the paper said, only needs to see the final decision of a machine learning model – the class label it applies to an input, for example, or in a speech recognition model, the transcribed sentence.
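
That operating principle fits in a few lines. A hedged toy sketch, with a 2D linear "black box" invented for illustration (not the authors' code, which handles images and more careful proposal steps):

```python
import numpy as np

rng = np.random.default_rng(1)

def classify(x):
    """Black box: only the final class label is visible, no scores, no gradients."""
    return int(x[0] + x[1] > 1.0)

original = np.array([0.2, 0.2])      # the benign input, classified 0
adv = np.array([2.0, 2.0])           # start: a large perturbation, classified 1

# Boundary attack: repeatedly propose a small step toward the original plus
# some noise, and keep the proposal only if it is still misclassified.
step = 0.3
for _ in range(2000):
    direction = original - adv
    proposal = adv + step * direction / np.linalg.norm(direction)
    proposal += rng.normal(scale=0.1 * step, size=2)
    if classify(proposal) == 1:      # still adversarial? then accept
        adv = proposal
    step *= 0.999                    # shrink steps as we near the boundary

final_dist = float(np.linalg.norm(adv - original))
print(final_dist)  # far below the starting distance of ~2.55
```

Note that the loop only ever asks the model for a label, exactly the access an attacker has against a deployed system, and the accepted point stays adversarial at every step.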

Source: Another AI attack, this time against ‘black box’ machine learning • The Register

KLM uses AI to answer questions on social media

According to KLM, its 250 social media employees conduct 30,000 conversations every week. The airline is mentioned on social media more than 130,000 times a week. On average, a conversation between KLM and a customer consists of five to six questions and answers. The frequently asked questions that can be answered automatically with artificial intelligence are usually asked at the beginning of a conversation.

Source: KLM laat kunstmatige intelligentie direct vragen beantwoorden op social – Emerce

China’s big brother: how artificial intelligence is catching criminals and advancing health care

“Our machines can very easily recognise you among at least 2 billion people in a matter of seconds,” says chief executive and Yitu co-founder Zhu Long, “which would have been unbelievable just three years ago.” Yitu’s Dragonfly Eye generic portrait platform already has 1.8 billion photographs to work with: those logged in the national database and you, if you have visited China recently. Yitu will not say whether Hong Kong identity card holders have been logged in the government’s database, for which the company provides navigation software and algorithms, but 320 million of the photos have come from China’s borders, including ports and airports, where pictures are taken of everyone who enters and leaves the country.

According to Yitu, its platform is also in service with more than 20 provincial public security departments, and is used as part of more than 150 municipal public security systems across the country, and Dragonfly Eye has already proved its worth. On its very first day of operation on the Shanghai Metro, in January, the system identified a wanted man when he entered a station. After matching his face against the database, Dragonfly Eye sent his photo to a policeman, who made an arrest. In the following three months, 567 suspected lawbreakers were caught on the city’s underground network.
[…]
“Chinese authorities are collecting and centralising ever more information about hundreds of millions of ordinary people, identifying persons who deviate from what they determine to be ‘normal thought’ and then surveilling them,” says Sophie Richardson, China director at HRW. The activist calls on Beijing to cease the collection of big data “until China has meaningful privacy rights and an accountable police force”.

Source: China’s big brother: how artificial intelligence is catching criminals and advancing health care | Post Magazine | South China Morning Post

AI helps find planets in other solar systems

The neural network is trained on 15,000 signals from the Kepler dataset that have been previously verified as planets or non-planets. A smaller test set with new, unseen data was fed to the neural network, and it correctly identified true planets from false positives with an accuracy of about 96 per cent.

The researchers then applied this model to weaker signals from 670 star systems where scientists had already found multiple known planets, to try to find any that might have been missed.

Vanderburg said they got lots of false positives, but also more potential real planets too. “It’s like sifting through rocks to find jewels. If you have a finer sieve then you will catch more rocks but you might catch more jewels, as well,” he said.

Source: Sigh. It’s not quite Star Trek’s Data, but it’ll do: AI helps boffins clock second Solar System • The Register

Google Taught an AI to Make Sense of the Human Genome

This week, Google released a tool called DeepVariant that uses deep learning to piece together a person’s genome and more accurately identify mutations in a DNA sequence.

Built on the back of the same technology that allows Google to identify whether a photo is of a cat or dog, DeepVariant solves an important problem in the world of DNA analysis. Modern DNA sequencers perform what’s known as high-throughput sequencing, returning not one long readout of a full DNA sequence but short snippets that overlap. Those snippets are then compared against another genome to help piece it together and identify variations. But the technology is error-prone, and it can be difficult for scientists to distinguish between those errors and small mutations. And small mutations matter: they could provide significant insight into, say, the root cause of a disease. Distinguishing which base pairs are the result of error and which are for real is called “variant calling.”
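
A toy sketch of what variant calling means in practice. This is nothing like DeepVariant's deep network; it is the naive counting baseline such tools improve on, with invented thresholds and data:

```python
from collections import Counter

def call_variants(reference, pileups, min_frac=0.8, min_depth=5):
    """Naive caller: at each position, compare the bases from overlapping
    reads against the reference and report well-supported mismatches."""
    variants = []
    for pos, ref_base in enumerate(reference):
        bases = pileups.get(pos, [])
        if len(bases) < min_depth:          # too few reads: unreliable, skip
            continue
        base, count = Counter(bases).most_common(1)[0]
        # A mismatch supported by most reads is a candidate mutation;
        # a mismatch in only a few reads is more likely a sequencing error.
        if base != ref_base and count / len(bases) >= min_frac:
            variants.append((pos, ref_base, base))
    return variants

reference = "ACGTACGT"
pileups = {
    2: ["G"] * 6,                  # matches the reference: no variant
    3: ["A"] * 7 + ["T"],          # strong support for A over reference T
    5: ["C"] * 4 + ["T", "T"],     # minority mismatches: treated as errors
}
print(call_variants(reference, pileups))   # → [(3, 'T', 'A')]
```

The hard part, and the part DeepVariant learns rather than hard-codes, is deciding where counting heuristics like `min_frac` break down.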

Source: Google Taught an AI to Make Sense of the Human Genome

AI in storytelling: Machines as cocreators

Sunspring debuted at the SCI-FI LONDON film festival in 2016. Set in a dystopian world with mass unemployment, the movie attracted many fans, with one viewer describing it as amusing but strange. But the most notable aspect of the film involves its creation: an artificial-intelligence (AI) bot wrote Sunspring’s screenplay.

Some researchers have already used machine learning to identify emotional arcs in stories. One method, developed at the University of Vermont, involved having computers scan text—video scripts or book content—to construct arcs.

We decided to go a step further. Working as part of a broader collaboration between MIT’s Lab for Social Machines and McKinsey’s Consumer Tech and Media team, we developed machine-learning models that rely on deep neural networks to “watch” small slices of video—movies, TV, and short online features—and estimate their positive or negative emotional content by the second.

These models consider all aspects of a video—not just the plot, characters, and dialogue but also more subtle touches, like a close-up of a person’s face or a snippet of music that plays during a car-chase scene. When the content of each slice is considered in total, the story’s emotional arc emerges.
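
The per-second estimates only become an arc once they are smoothed over time. A minimal sketch of that last step (a NumPy moving average over invented toy scores; the McKinsey/MIT models themselves are deep networks):

```python
import numpy as np

def emotional_arc(valence_per_second, window=5):
    """Smooth noisy per-second valence estimates into a story arc
    with a centered moving average."""
    kernel = np.ones(window) / window
    return np.convolve(valence_per_second, kernel, mode="same")

# Toy "rags to riches" arc: negative opening, positive resolution, plus noise
rng = np.random.default_rng(0)
seconds = np.linspace(-1.0, 1.0, 120)
scores = seconds + rng.normal(scale=0.3, size=120)
arc = emotional_arc(scores, window=15)
print(arc[0] < 0 < arc[-1])   # → True
```

The window size trades responsiveness for stability: too small and every jump scare registers as a plot turn, too large and the arc flattens out.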

Source: AI in storytelling: Machines as cocreators | McKinsey & Company

AI-Assisted Fake Porn Is Here and We’re All Fucked – Motherboard

There’s a video of Gal Gadot having sex with her stepbrother on the internet. But it’s not really Gadot’s body, and it’s barely her own face. It’s an approximation, face-swapped to look like she’s performing in an existing incest-themed porn video.

The video was created with a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together.

It’s not going to fool anyone who looks closely. Sometimes the face doesn’t track correctly and there’s an uncanny valley effect at play, but at a glance it seems believable. It’s especially striking considering that it’s allegedly the work of one person—a Redditor who goes by the name ‘deepfakes’—not a big special effects studio that can digitally recreate a young Princess Leia in Rogue One using CGI. Instead, deepfakes uses open-source machine learning tools like TensorFlow, which Google makes freely available to researchers, graduate students, and anyone with an interest in machine learning.

Like the Adobe tool that can make people say anything, and the Face2Face algorithm that can swap a recorded video with real-time face tracking, this new type of fake porn shows that we’re on the verge of living in a world where it’s trivially easy to fabricate believable videos of people doing and saying things they never did. Even having sex.

So far, deepfakes has posted hardcore porn videos featuring the faces of Scarlett Johansson, Maisie Williams, Taylor Swift, Aubrey Plaza, and Gal Gadot on Reddit. I’ve reached out to the management companies and/or publicists who represent each of these actors informing them of the fake videos, and will update if I hear back.

Source: AI-Assisted Fake Porn Is Here and We’re All Fucked – Motherboard

DeepMind’s AI became a superhuman chess (and shogi and go) player in a few hours using generic reinforcement learning

In the paper, DeepMind describes how a descendant of the AI program that first conquered the board game Go has taught itself to play a number of other games at a superhuman level. After eight hours of self-play, the program bested the AI that first beat the human world Go champion; and after four hours of training, it beat the current world champion chess-playing program, Stockfish. Then for a victory lap, it trained for just two hours and polished off one of the world’s best shogi-playing programs named Elmo (shogi being a Japanese version of chess that’s played on a bigger board).

One of the key advances here is that the new AI program, named AlphaZero, wasn’t specifically designed to play any of these games. In each case, it was given some basic rules (like how knights move in chess, and so on) but was programmed with no other strategies or tactics. It simply got better by playing itself over and over again at an accelerated pace — a method of training AI known as “reinforcement learning.”

Using reinforcement learning in this way isn’t new in and of itself. DeepMind’s engineers used the same method to create AlphaGo Zero, the AI program that was unveiled this October. But, as this week’s paper describes, the new AlphaZero is a “more generic version” of the same software, meaning it can be applied to a broader range of tasks without being primed beforehand.

What’s remarkable here is that in less than 24 hours, the same computer program was able to teach itself how to play three complex board games at superhuman levels. That’s a new feat for the world of AI.

Source: DeepMind’s AI became a superhuman chess player in a few hours – The Verge

This frostbitten black metal album was created by an artificial intelligence

“Coditany of Timeness” is a convincing lo-fi black metal album, complete with atmospheric interludes, tremolo guitar, frantic blast beats and screeching vocals. But the record, which you can listen to on Bandcamp, wasn’t created by musicians. Instead, it was generated by two musical technologists using deep learning software that ingests a musical album, processes it, and spits out an imitation of its style.

To create Coditany, the software broke “Diotima,” a 2011 album by a New York black metal band called Krallice, into small segments of audio. Then they fed each segment through a neural network — a type of artificial intelligence modeled loosely on a biological brain — and asked it to guess what the waveform of the next individual sample of audio would be. If the guess was right, the network would strengthen the paths of the neural network that led to the correct answer, similar to the way electrical connections between neurons in our brain strengthen as we learn new skills.

At first the network just produced washes of textured noise. “Early in its training, the kinds of sounds it produces are very noisy and grotesque and textural,” said CJ Carr, one of the creators of the algorithm. But as it moved through guesses — as many as five million over the course of three days — the network started to sound a lot like Krallice. “As it improves its training, you start hearing elements of the original music it was trained on come through more and more.”

As someone who used to listen to lo-fi black metal, I found Coditany of Timeness not only convincing — it sounds like a real human band — but even potentially enjoyable. The neural network managed to capture the genre’s penchant for long intros broken by frantic drums and distorted vocals. The software’s take on Krallice, which its creators filled out with song titles and album art that were also algorithmically generated, might not garner a glowing review on Pitchfork, but it’s strikingly effective at capturing the aesthetic. If I didn’t know it was generated by an algorithm, I’m not sure I’d be able to tell the difference.
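
The next-sample training objective described above is easy to miniaturize. A hedged sketch with a linear autoregressive model standing in for the neural network, and a pure tone standing in for the album:

```python
import numpy as np

# The generator was trained to guess the next audio sample from the samples
# before it. A linear autoregressive model is the simplest version of that
# idea: predict sample t as a weighted sum of the k samples preceding it.
k = 8
sr = 4000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 220 * t)        # stand-in for the training album

# Build (history, next sample) training pairs and fit by least squares
X = np.array([audio[i:i + k] for i in range(len(audio) - k)])
y = audio[k:]
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step prediction: a pure tone follows a linear recurrence exactly,
# so the fitted model predicts the next sample almost perfectly.
one_step_error = float(np.abs(X @ weights - y).max())

# Generate new audio by feeding predictions back in, sample by sample,
# the way the album generator hallucinates a waveform.
generated = list(audio[:k])
for _ in range(100):
    generated.append(float(np.dot(weights, generated[-k:])))

print(one_step_error)   # tiny: the tone is perfectly linearly predictable
```

Real music is nowhere near linearly predictable, which is why the project needed millions of guesses and a deep network; but the train-on-next-sample, then-generate-by-feedback loop is the same shape.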

Source: This frostbitten black metal album was created by an artificial intelligence | The Outline

Announcing the Initial Release of Mozilla’s Open Source Speech Recognition Model and Voice Dataset

I’m excited to announce the initial release of Mozilla’s open source speech recognition model that has an accuracy approaching what humans can perceive when listening to the same recordings. We are also releasing the world’s second largest publicly available voice dataset, which was contributed to by nearly 20,000 people globally.
[…]
This is why we started DeepSpeech as an open source project. Together with a community of likeminded developers, companies and researchers, we have applied sophisticated machine learning techniques and a variety of innovations to build a speech-to-text engine that has a word error rate of just 6.5% on LibriSpeech’s test-clean dataset.
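
The word error rate quoted above (6.5%) is just word-level edit distance divided by the length of the reference transcript. A self-contained sketch of the standard computation:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with word-level edit distance (dynamic programming)."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[-1][-1] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat in the mat"))
# → 0.1666… (one substitution out of six reference words)
```

A 6.5% WER therefore means roughly one word in fifteen is wrong against the LibriSpeech test-clean transcripts.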

In our initial release today, we have included pre-built packages for Python, NodeJS and a command-line binary that developers can use right away to experiment with speech recognition.
[…]
Our aim is to make it easy for people to donate their voices to a publicly available database, and in doing so build a voice dataset that everyone can use to train new voice-enabled applications.

Today, we’ve released the first tranche of donated voices: nearly 400,000 recordings, representing 500 hours of speech. Anyone can download this data.
[…]
To this end, while we’ve started with English, we are working hard to ensure that Common Voice will support voice donations in multiple languages beginning in the first half of 2018.

Finally, as we have experienced the challenge of finding publicly available voice datasets, alongside the Common Voice data we have also compiled links to download all the other large voice collections we know about.

Source: Announcing the Initial Release of Mozilla’s Open Source Speech Recognition Model and Voice Dataset – The Mozilla Blog

Google’s AI Built its own AI That Outperforms Any Made by Humans

In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that’s capable of generating its own AIs. More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a ‘child’ that outperformed all of its human-made counterparts.

The Google researchers automated the design of machine learning models using an approach called reinforcement learning. AutoML acts as a controller neural network that develops a child AI network for a specific task.
[…]
When tested on the ImageNet image classification and COCO object detection data sets, which the Google researchers call “two of the most respected large-scale academic data sets in computer vision,” NASNet outperformed all other computer vision systems.

According to the researchers, NASNet was 82.7 percent accurate at predicting images on ImageNet’s validation set, 1.2 percent better than any previously published result. On the COCO object detection task, the system also beat the previous state of the art by 4 percent, achieving a 43.1 percent mean average precision (mAP).

Additionally, a less computationally demanding version of NASNet outperformed the best similarly sized models for mobile platforms by 3.1 percent.

Source: Google’s AI Built its own AI That Outperforms Any Made by Humans

AirHelp takes the next step in artificial intelligence

AirHelp, the claims company for airline passengers, is using artificial intelligence to decide in real time whether a claim is strong enough to submit. Its legal bot, Lara, determines whether delays and cancellations qualify for compensation under European regulations. The bot is programmed to assess flight status, airport statistics and weather reports, among other things. Lara has been tested on more than six thousand claims. The self-learning system assesses claims with an accuracy of 95 percent, compared with a human score of 91 percent.
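
The core eligibility rules Lara applies can be roughed out in a few lines. A sketch of the EC 261/2004 compensation tiers (simplified: the actual regulation has more conditions, such as reduced amounts for some long-haul delays, and this is in no way AirHelp's code):

```python
def eu261_compensation(delay_hours, distance_km, extraordinary=False):
    """Rough sketch of the EU 261/2004 compensation tiers. The real rules
    have more conditions and exemptions the airline must prove."""
    if extraordinary or delay_hours < 3:
        return 0          # no compensation for short delays or e.g. severe weather
    if distance_km <= 1500:
        return 250        # euros, short-haul
    if distance_km <= 3500:
        return 400        # euros, medium-haul
    return 600            # euros, long-haul

print(eu261_compensation(4.5, 1200))                      # → 250
print(eu261_compensation(5.0, 2000))                      # → 400
print(eu261_compensation(6.0, 7000, extraordinary=True))  # → 0
```

The hard part is not the tiers but the evidence: deciding whether a delay was really "extraordinary" is where the flight-status and weather data come in.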

Source: AirHelp zet volgende stap in kunstmatige intelligentie – Emerce

Amazon Announces Five New Machine Learning Services and the World’s First Deep Learning-Enabled Video Camera for Developers

Amazon SageMaker makes it easy to build, train, and deploy machine learning models

AWS DeepLens is the world’s first deep learning-enabled wireless video camera built to give developers hands-on experience with machine learning

Amazon Transcribe, Amazon Translate, Amazon Comprehend, and Amazon Rekognition Video allow app developers to easily build applications that transcribe speech to text, translate text between languages, extract insights from text, and analyze videos

Source: Amazon – Press Room – RSS Content

Boffins craft perfect ‘head generator’ to beat facial recognition

Researchers from the Max Planck Institute for Informatics have defeated facial recognition on big social media platforms – by removing faces from photos and replacing them with automatically-painted replicas.

As the team of six researchers explained in their arXiv paper this month, people who want to stay private often blur their photos, not knowing that this is “surprisingly ineffective against state-of-the-art person recognisers.”
[…]
The result, the boffins claimed, is that their model can provide a realistic-looking result, even when it’s faced with “challenging poses and scenarios” including different lighting conditions, such that the “fake” face “blends naturally into the context”.

In common with modern facial recognition systems, the researchers’ software builds a point cloud of landmarks captured from someone’s face; its adversarial attack against recognition perturbed those points.

Pairs of points from the original landmarks (real) and the generated landmarks (fake) are fed into the “head generator and discriminator” software to create the inpainted face.

Source: The Register

Facebook rolls out AI to detect suicidal posts before they’re reported

Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.

Facebook previously tested using AI to detect troubling posts and more prominently surface suicide reporting options to friends in the U.S. Now Facebook will scour all types of content around the world with this AI, except in the European Union, where General Data Protection Regulation privacy laws on profiling users based on sensitive information complicate the use of this tech.
[…]
Unfortunately, after TechCrunch asked if there was a way for users to opt out of having their posts scanned, a Facebook spokesperson responded that users cannot opt out. They noted that the feature is designed to enhance user safety, and that support resources offered by Facebook can be quickly dismissed if a user doesn’t want to see them.

Facebook trained the AI by finding patterns in the words and imagery used in posts that have been manually reported for suicide risk in the past. It also looks for comments like “are you OK?” and “Do you need help?”
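
A toy version of "finding patterns in the words" of previously reported posts: a tiny Naive Bayes scorer over invented example texts. This is nothing like Facebook's production system, only the textbook idea behind word-pattern flagging:

```python
import math
from collections import Counter

# Labeled examples (invented): posts previously reported vs ordinary posts
reported = ["i feel hopeless and alone", "i cant go on anymore"]
ordinary = ["great game last night", "having lunch with friends"]

def train(texts):
    counts = Counter(w for t in texts for w in t.split())
    return counts, sum(counts.values())

flagged_counts, flagged_total = train(reported)
normal_counts, normal_total = train(ordinary)
vocab = set(flagged_counts) | set(normal_counts)

def log_odds(text):
    """Positive score: wording resembles previously reported posts."""
    score = 0.0
    for w in text.split():
        # Laplace smoothing keeps unseen words from zeroing the estimate
        p_flag = (flagged_counts[w] + 1) / (flagged_total + len(vocab))
        p_norm = (normal_counts[w] + 1) / (normal_total + len(vocab))
        score += math.log(p_flag / p_norm)
    return score

print(log_odds("i feel so alone") > 0)      # → True
print(log_odds("lunch with friends") > 0)   # → False
```

In the real system, posts scoring above a threshold go to human moderators, which is the step that keeps a crude word model from acting on its own.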

Source: TechCrunch

Using Generative Adversarial Networks to create terrain maps

A team of researchers from the University of Lyon, Purdue and Ubisoft have published a paper showing what may well be the future of creating video game worlds: an AI that is able to construct most of its own 3D landscapes.

Similar to Nvidia’s work that is able to conjure its own celebrity mugshots, the tech would require only minimal input from a human, who would just have to contribute some basic requirements, draw some lines, and then let the AI do all the hard work: namely, filling in all the gaps with elevation, ridges and natural-looking rock formations.

Source: Kotaku

NDA Lynn: AI screens your NDAs

NDAs, or confidentiality agreements, are a fact of life if you’re in business. You’ve probably read tons of them, and you know more or less what you would accept. Of course you can hire a lawyer to review that NDA. And you know they’ll find faults and recommend changes to better protect you. But it’ll cost you, in both time and money. And do you really need the perfect document, or is it OK to flag the key risks and move on? That’s where I come in. I’m an AI lawyerbot and I can review your NDA. Free of charge.

Source: NDA Lynn | Home

Forget cookies or canvas: How to follow people around the web using only their typing techniques

In this paper (Sequential Keystroke Behavioral Biometrics for Mobile User Identification via Multi-view Deep Learning), we propose DEEPSERVICE, a new technique that can identify mobile users based on keystroke information captured by a special keyboard or web browser. Our evaluation results indicate that DEEPSERVICE is highly accurate in identifying mobile users (over 93% accuracy). The technique is also efficient, taking less than 1 ms to perform identification.
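
A hedged sketch of the underlying idea, with invented timing data and a nearest-profile match in place of the paper's multi-view deep network. Keystroke dynamics boil down to dwell times (how long each key is held) and flight times (the gaps between keys):

```python
import numpy as np

def features(events):
    """events: list of (press_time, release_time) per keystroke, in seconds.
    Returns [mean dwell time, mean flight time]."""
    dwell = [release - press for press, release in events]
    flight = [events[i + 1][0] - events[i][1] for i in range(len(events) - 1)]
    return np.array([np.mean(dwell), np.mean(flight)])

# Enrolled user profiles built from invented typing samples
profiles = {
    "alice": features([(0.00, 0.08), (0.20, 0.29), (0.41, 0.49)]),  # quick
    "bob":   features([(0.00, 0.15), (0.40, 0.56), (0.81, 0.95)]),  # slower
}

def identify(events):
    """Attribute a new sample to the enrolled user with the nearest profile."""
    f = features(events)
    return min(profiles, key=lambda user: np.linalg.norm(profiles[user] - f))

# An unseen sample with short dwell and flight times matches "alice"
sample = [(0.00, 0.07), (0.19, 0.28), (0.40, 0.47)]
print(identify(sample))  # → alice
```

Two averaged features are enough for this toy; the paper's point is that sequences of such timings, fed to a deep model, identify users with over 93% accuracy, with no cookies or canvas fingerprinting required.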

Source: [1711.02703] Sequential Keystroke Behavioral Biometrics for Mobile User Identification via Multi-view Deep Learning

Re:scam and jolly roger – AI responses to phishing emails and telemarketers

Forward your scammer emails to Re:scam and here’s what happens.

Source: Re:scam

The AI bot assumes one of many identities, complete with small mistakes, and tries to keep the scammer busy with the email exchange for as long as possible using humor.

Which reminds me of http://www.jollyrogertelco.com/ (seems to be down now), which offered a phone number connected to an AI that would try to keep a telemarketer talking for as long as possible.

Machine learning of neural representations of suicide and emotion concepts identifies suicidal youth | Nature Human Behaviour

The clinical assessment of suicidal risk would be substantially complemented by a biologically based measure that assesses alterations in the neural representations of concepts related to death and life in people who engage in suicidal ideation. This study used machine-learning algorithms (Gaussian Naive Bayes) to identify such individuals (17 suicidal ideators versus 17 controls) with high (91%) accuracy, based on their altered functional magnetic resonance imaging neural signatures of death-related and life-related concepts. The most discriminating concepts were ‘death’, ‘cruelty’, ‘trouble’, ‘carefree’, ‘good’ and ‘praise’. A similar classification accurately (94%) discriminated nine suicidal ideators who had made a suicide attempt from eight who had not. Moreover, a major facet of the concept alterations was the evoked emotion, whose neural signature served as an alternative basis for accurate (85%) group classification.
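
Gaussian Naive Bayes, the classifier the study used, is simple enough to sketch in full. The data below is invented toy data standing in for the per-concept fMRI signatures; the point is only to show the model class:

```python
import numpy as np

# Gaussian Naive Bayes in miniature: model each feature, per class, as an
# independent Gaussian, then pick the class with the higher likelihood.
def fit(X, y):
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9)
    return params

def predict(params, x):
    def log_likelihood(mean, var):
        return float(np.sum(-0.5 * np.log(2 * np.pi * var)
                            - (x - mean) ** 2 / (2 * var)))
    return max(params, key=lambda c: log_likelihood(*params[c]))

# Toy stand-in for per-concept activation features for two groups
rng = np.random.default_rng(0)
group0 = rng.normal(0.0, 1.0, size=(20, 4))
group1 = rng.normal(1.5, 1.0, size=(20, 4))
X = np.vstack([group0, group1])
y = np.array([0] * 20 + [1] * 20)

model = fit(X, y)
correct = sum(predict(model, x) == label for x, label in zip(X, y))
print(correct / len(y))   # high accuracy on these well-separated toy groups
```

With only 34 participants, the study's 91% figure rests on small samples, which is exactly the regime where a simple, strongly regularized model like Naive Bayes is a sensible choice over a deep network.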