Another AI attack, this time against ‘black box’ machine learning

Unlike adversarial models that attack AIs “from the inside”, attacks developed for black boxes could be used against closed systems like autonomous cars, security (facial recognition, for example), or speech recognition (Alexa or Cortana). The tool, called Foolbox, is currently under review for presentation at next year’s International Conference on Learning Representations (kicking off at the end of April). Wieland Brendel, Jonas Rauber and Matthias Bethge of the Eberhard Karls University Tübingen, Germany, explained at arXiv that Foolbox implements a “decision-based” attack called a boundary attack, which “starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial”.

“Its basic operating principle – starting from a large perturbation and successively reducing it – inverts the logic of essentially all previous adversarial attacks. Besides being surprisingly simple, the boundary attack is also extremely flexible”, they wrote.

For example, “transfer-based attacks” have to be tested against the same training data as the models they’re attacking, and need “cumbersome substitute models”.

Gradient-based attacks, the paper claimed, also need detailed knowledge about the target model, while score-based attacks need access to the target model’s confidence scores.

The boundary attack, the paper said, only needs to see the final decision of a machine learning model – the class label it applies to an input, for example, or in a speech recognition model, the transcribed sentence.
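To make the attack logic concrete, here is a minimal sketch of a boundary-attack loop against a hypothetical black-box classifier. Everything here (the toy model, step sizes, loop structure) is illustrative, not Foolbox's actual API: the only thing the attacker queries is the model's final decision.

```python
import numpy as np

def toy_model(x):
    # Stand-in black-box classifier: all we see is its decision (label 0 or 1).
    return int(x.mean() > 0.5)

def boundary_attack(model, original, start_adv, steps=200, rng=None):
    """Minimal decision-based boundary attack sketch: begin with a large
    adversarial perturbation and shrink it while staying adversarial."""
    rng = rng or np.random.default_rng(0)
    target_label = model(start_adv)
    assert target_label != model(original)
    adv = start_adv.copy()
    for _ in range(steps):
        # Step toward the original input (this shrinks the perturbation)...
        candidate = adv + 0.05 * (original - adv)
        # ...plus a small random wiggle to walk along the decision boundary.
        candidate += 0.01 * rng.standard_normal(adv.shape)
        candidate = np.clip(candidate, 0.0, 1.0)
        # Keep the step only if the model's decision stays adversarial.
        if model(candidate) == target_label:
            adv = candidate
    return adv

original = np.full(16, 0.2)   # classified 0 by toy_model
start = np.full(16, 0.9)      # classified 1: a large adversarial perturbation
adv = boundary_attack(toy_model, original, start)
```

After the loop, `adv` is still misclassified but sits much closer to the original input, which is exactly the inverted logic the authors describe: start adversarial, then minimize the perturbation.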

Source: Another AI attack, this time against ‘black box’ machine learning • The Register

KLM uses AI to answer questions on social media

According to KLM, its 250 social media staff handle 30,000 conversations per week. The airline is mentioned on social media more than 130,000 times a week. On average, a conversation between KLM and a customer consists of five to six questions and answers. The frequently asked questions that can be answered automatically with the help of artificial intelligence are usually asked at the start of the conversation.

Source: KLM laat kunstmatige intelligentie direct vragen beantwoorden op social – Emerce

China’s big brother: how artificial intelligence is catching criminals and advancing health care

“Our machines can very easily recognise you among at least 2 billion people in a matter of seconds,” says chief executive and Yitu co-founder Zhu Long, “which would have been unbelievable just three years ago.” Yitu’s Dragonfly Eye generic portrait platform already has 1.8 billion photographs to work with: those logged in the national database and you, if you have visited China recently. Yitu will not say whether Hong Kong identity card holders have been logged in the government’s database, for which the company provides navigation software and algor­ithms, but 320 million of the photos have come from China’s borders, including ports and airports, where pictures are taken of everyone who enters and leaves the country.

According to Yitu, its platform is also in service with more than 20 provincial public security departments, and is used as part of more than 150 municipal public security systems across the country, and Dragonfly Eye has already proved its worth. On its very first day of operation on the Shanghai Metro, in January, the system identified a wanted man when he entered a station. After matching his face against the database, Dragonfly Eye sent his photo to a policeman, who made an arrest. In the following three months, 567 suspected lawbreakers were caught on the city’s underground network.
[…]
“Chinese authorities are collecting and centralising ever more information about hundreds of millions of ordinary people, identifying persons who deviate from what they determine to be ‘normal thought’ and then surveilling them,” says Sophie Richardson, China director at HRW. The activist calls on Beijing to cease the collection of big data “until China has meaningful privacy rights and an accountable police force”.

Source: China’s big brother: how artificial intelligence is catching criminals and advancing health care | Post Magazine | South China Morning Post

AI helps find planets in other solar systems

The neural network is trained on 15,000 signals from the Kepler dataset that have been previously verified as planets or non-planets. A smaller test set with new, unseen data was fed to the neural network, and it correctly identified true planets from false positives with an accuracy of about 96 per cent. The researchers then applied this model to weaker signals from 670 star systems where scientists had already found multiple known planets, to try to find any that might have been missed. Vanderburg said they got lots of false positives, but also more potentially real planets. “It’s like sifting through rocks to find jewels. If you have a finer sieve then you will catch more rocks but you might catch more jewels, as well,” he said.

Source: Sigh. It’s not quite Star Trek’s Data, but it’ll do: AI helps boffins clock second Solar System • The Register

Google Taught an AI to Make Sense of the Human Genome

This week, Google released a tool called DeepVariant that uses deep learning to piece together a person’s genome and more accurately identify mutations in a DNA sequence. Built on the back of the same technology that allows Google to identify whether a photo is of a cat or dog, DeepVariant solves an important problem in the world of DNA analysis. Modern DNA sequencers perform what’s known as high-throughput sequencing, returning not one long readout of a full DNA sequence but short snippets that overlap. Those snippets are then compared against another genome to help piece it together and identify variations. But the technology is error-prone, and it can be difficult for scientists to distinguish between those errors and small mutations. And small mutations matter: they could provide significant insight into, say, the root cause of a disease. Distinguishing which base pairs are the result of error and which are for real is called “variant calling.”

Source: Google Taught an AI to Make Sense of the Human Genome

AI in storytelling: Machines as cocreators

Sunspring debuted at the SCI-FI LONDON film festival in 2016. Set in a dystopian world with mass unemployment, the movie attracted many fans, with one viewer describing it as amusing but strange. But the most notable aspect of the film involves its creation: an artificial-intelligence (AI) bot wrote Sunspring’s screenplay.

Some researchers have already used machine learning to identify emotional arcs in stories. One method, developed at the University of Vermont, involved having computers scan text—video scripts or book content—to construct arcs.

We decided to go a step further. Working as part of a broader collaboration between MIT’s Lab for Social Machines and McKinsey’s Consumer Tech and Media team, we developed machine-learning models that rely on deep neural networks to “watch” small slices of video—movies, TV, and short online features—and estimate their positive or negative emotional content by the second.

These models consider all aspects of a video—not just the plot, characters, and dialogue but also more subtle touches, like a close-up of a person’s face or a snippet of music that plays during a car-chase scene. When the content of each slice is considered in total, the story’s emotional arc emerges.
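The last step, turning per-second estimates into an arc, can be illustrated at toy scale: a centered moving average over hypothetical per-second valence scores already produces the smoothed curve. The scores and window size below are made up; the real per-second estimates come from the deep models described above.

```python
def emotional_arc(per_second_valence, window=3):
    """Smooth per-second valence estimates into a story arc.
    A centered moving average; an illustrative stand-in, not the MIT/McKinsey method."""
    half = window // 2
    arc = []
    for i in range(len(per_second_valence)):
        lo = max(0, i - half)
        hi = min(len(per_second_valence), i + half + 1)
        seg = per_second_valence[lo:hi]
        arc.append(sum(seg) / len(seg))
    return arc

# A single spike of positive emotion gets spread into a gentle rise and fall.
arc = emotional_arc([0, 0, 1, 0, 0], window=3)
```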

Source: AI in storytelling: Machines as cocreators | McKinsey & Company

AI-Assisted Fake Porn Is Here and We’re All Fucked – Motherboard

There’s a video of Gal Gadot having sex with her stepbrother on the internet. But it’s not really Gadot’s body, and it’s barely her own face. It’s an approximation, face-swapped to look like she’s performing in an existing incest-themed porn video. The video was created with a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together.

It’s not going to fool anyone who looks closely. Sometimes the face doesn’t track correctly and there’s an uncanny valley effect at play, but at a glance it seems believable. It’s especially striking considering that it’s allegedly the work of one person—a Redditor who goes by the name ‘deepfakes’—not a big special effects studio that can digitally recreate a young Princess Leia in Rogue One using CGI. Instead, deepfakes uses open-source machine learning tools like TensorFlow, which Google makes freely available to researchers, graduate students, and anyone with an interest in machine learning.

Like the Adobe tool that can make people say anything, and the Face2Face algorithm that can swap a recorded video with real-time face tracking, this new type of fake porn shows that we’re on the verge of living in a world where it’s trivially easy to fabricate believable videos of people doing and saying things they never did. Even having sex.

So far, deepfakes has posted hardcore porn videos featuring the faces of Scarlett Johansson, Maisie Williams, Taylor Swift, Aubrey Plaza, and Gal Gadot on Reddit. I’ve reached out to the management companies and/or publicists who represent each of these actors informing them of the fake videos, and will update if I hear back.

Source: AI-Assisted Fake Porn Is Here and We’re All Fucked – Motherboard

DeepMind’s AI became a superhuman chess (and shogi and go) player in a few hours using generic reinforcement learning

In the paper, DeepMind describes how a descendant of the AI program that first conquered the board game Go has taught itself to play a number of other games at a superhuman level. After eight hours of self-play, the program bested the AI that first beat the human world Go champion; and after four hours of training, it beat the current world champion chess-playing program, Stockfish. Then for a victory lap, it trained for just two hours and polished off one of the world’s best shogi-playing programs named Elmo (shogi being a Japanese version of chess that’s played on a bigger board).

One of the key advances here is that the new AI program, named AlphaZero, wasn’t specifically designed to play any of these games. In each case, it was given some basic rules (like how knights move in chess, and so on) but was programmed with no other strategies or tactics. It simply got better by playing itself over and over again at an accelerated pace — a method of training AI known as “reinforcement learning.”

Using reinforcement learning in this way isn’t new in and of itself. DeepMind’s engineers used the same method to create AlphaGo Zero, the AI program that was unveiled this October. But, as this week’s paper describes, the new AlphaZero is a “more generic version” of the same software, meaning it can be applied to a broader range of tasks without being primed beforehand. What’s remarkable here is that in less than 24 hours, the same computer program was able to teach itself how to play three complex board games at superhuman levels. That’s a new feat for the world of AI.
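The shape of the self-play loop can be sketched at toy scale. Below, a tabular agent (not a neural network) learns single-pile Nim purely by playing both sides against itself and backing up game outcomes; the game, learning rate, and exploration rate are all illustrative choices, nothing like DeepMind's actual setup.

```python
import random

def train_self_play(pile=10, episodes=5000, seed=0):
    """Self-play sketch in the spirit of AlphaZero's training loop: the same
    agent plays both sides of single-pile Nim (take 1-3 stones, taking the
    last stone wins) and learns position values purely from game outcomes."""
    random.seed(seed)
    # V[n]: learned value of facing a pile of n stones, for the player to move.
    V = {n: 0.0 for n in range(pile + 1)}
    for _ in range(episodes):
        n, history = pile, []
        while n > 0:
            moves = range(1, min(3, n) + 1)
            if random.random() < 0.2:
                take = random.choice(list(moves))          # explore
            else:
                take = min(moves, key=lambda t: V[n - t])  # leave opponent the worst pile
            history.append(n)
            n -= take
        # Whoever made the last move took the final stone and won.
        outcome = 1.0
        for pos in reversed(history):
            V[pos] += 0.1 * (outcome - V[pos])
            outcome = -outcome  # players alternate, so the sign flips each ply
    return V

values = train_self_play()
```

With no strategy programmed in, the agent discovers that piles which are a multiple of 4 (like 4) are losing positions, and the others winning, which is the known optimal theory for this game.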

Source: DeepMind’s AI became a superhuman chess player in a few hours – The Verge

This frostbitten black metal album was created by an artificial intelligence

“Coditany of Timeness” is a convincing lo-fi black metal album, complete with atmospheric interludes, tremolo guitar, frantic blast beats and screeching vocals. But the record, which you can listen to on Bandcamp, wasn’t created by musicians. Instead, it was generated by two musical technologists using deep learning software that ingests a musical album, processes it, and spits out an imitation of its style.

To create Coditany, the software broke “Diotima”, a 2011 album by a New York black metal band called Krallice, into small segments of audio. Then they fed each segment through a neural network — a type of artificial intelligence modeled loosely on a biological brain — and asked it to guess what the waveform of the next individual sample of audio would be. If the guess was right, the network would strengthen the paths of the neural network that led to the correct answer, similar to the way electrical connections between neurons in our brain strengthen as we learn new skills.

At first the network just produced washes of textured noise. “Early in its training, the kinds of sounds it produces are very noisy and grotesque and textural,” said CJ Carr, one of the creators of the algorithm. But as it moved through guesses — as many as five million over the course of three days — the network started to sound a lot like Krallice. “As it improves its training, you start hearing elements of the original music it was trained on come through more and more.”

As someone who used to listen to lo-fi black metal, I found Coditany of Timeness not only convincing — it sounds like a real human band — but even potentially enjoyable. The neural network managed to capture the genre’s penchant for long intros broken by frantic drums and distorted vocals. The software’s take on Krallice, which its creators filled out with song titles and album art that were also algorithmically generated, might not garner a glowing review on Pitchfork, but it’s strikingly effective at capturing the aesthetic. If I didn’t know it was generated by an algorithm, I’m not sure I’d be able to tell the difference.
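The next-sample training objective is easy to demonstrate at toy scale. The sketch below substitutes a simple count-based predictor for the neural network (a two-sample context over integer "samples"); it is illustrative only, not the authors' model, but it shows the same idea of learning to continue a waveform one sample at a time.

```python
from collections import Counter, defaultdict

def train(samples, order=2):
    """Learn next-sample prediction from a context of `order` previous samples
    by counting which sample followed each context in the training audio."""
    counts = defaultdict(Counter)
    for i in range(order, len(samples)):
        counts[tuple(samples[i - order:i])][samples[i]] += 1
    # Predict the most frequently observed continuation for each context.
    return {ctx: c.most_common(1)[0][0] for ctx, c in counts.items()}

def generate(model, seed, n):
    """Generate by repeatedly predicting the next sample from recent context."""
    out = list(seed)
    while len(out) < n:
        out.append(model[tuple(out[-len(seed):])])
    return out

# "Train" on a quantized triangle wave, then imitate it.
wave = [0, 1, 2, 3, 4, 3, 2, 1] * 8
model = train(wave)
imitation = generate(model, seed=wave[:2], n=16)
```

With enough context the predictor reproduces the training waveform's pattern exactly, which is the degenerate, toy-scale version of the network "starting to sound a lot like Krallice".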

Source: This frostbitten black metal album was created by an artificial intelligence | The Outline

Announcing the Initial Release of Mozilla’s Open Source Speech Recognition Model and Voice Dataset

I’m excited to announce the initial release of Mozilla’s open source speech recognition model that has an accuracy approaching what humans can perceive when listening to the same recordings. We are also releasing the world’s second largest publicly available voice dataset, which was contributed to by nearly 20,000 people globally.
[…]
This is why we started DeepSpeech as an open source project. Together with a community of likeminded developers, companies and researchers, we have applied sophisticated machine learning techniques and a variety of innovations to build a speech-to-text engine that has a word error rate of just 6.5% on LibriSpeech’s test-clean dataset.
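Word error rate, the metric behind the 6.5% figure, is the word-level edit distance (substitutions, insertions, deletions) divided by the number of words in the reference transcript. A minimal sketch of how it is computed:

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j]: edit distance between first i reference and first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[-1][-1] / len(ref)
```

For example, transcribing "the cat sat on the mat" as "the cat sat on a mat" is one substitution in six words, a WER of about 16.7%.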

In our initial release today, we have included pre-built packages for Python, NodeJS and a command-line binary that developers can use right away to experiment with speech recognition.
[…]
Our aim is to make it easy for people to donate their voices to a publicly available database, and in doing so build a voice dataset that everyone can use to train new voice-enabled applications.

Today, we’ve released the first tranche of donated voices: nearly 400,000 recordings, representing 500 hours of speech. Anyone can download this data.
[…]
To this end, while we’ve started with English, we are working hard to ensure that Common Voice will support voice donations in multiple languages beginning in the first half of 2018.

Finally, as we have experienced the challenge of finding publicly available voice datasets, alongside the Common Voice data we have also compiled links to download all the other large voice collections we know about.

Source: Announcing the Initial Release of Mozilla’s Open Source Speech Recognition Model and Voice Dataset – The Mozilla Blog

Google’s AI Built its own AI That Outperforms Any Made by Humans

In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that’s capable of generating its own AIs. More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a ‘child’ that outperformed all of its human-made counterparts. The Google researchers automated the design of machine learning models using an approach called reinforcement learning. AutoML acts as a controller neural network that develops a child AI network for a specific task.
[…]
When tested on the ImageNet image classification and COCO object detection data sets, which the Google researchers call “two of the most respected large-scale academic data sets in computer vision,” NASNet outperformed all other computer vision systems.

According to the researchers, NASNet was 82.7 percent accurate at predicting images on ImageNet’s validation set. This is 1.2 percent better than any previously published results, and the system is also 4 percent more efficient, with a 43.1 percent mean Average Precision (mAP).

Additionally, a less computationally demanding version of NASNet outperformed the best similarly sized models for mobile platforms by 3.1 percent.

Source: Google’s AI Built its own AI That Outperforms Any Made by Humans

AirHelp takes the next step in artificial intelligence

AirHelp, the claims company for air passengers, is using artificial intelligence to decide in real time whether a claim is strong enough to submit. The legal bot Lara determines whether delays and cancellations qualify for compensation under European regulations. The bot is programmed to assess, among other things, flight status, airport statistics and weather reports. Lara has been tested on more than six thousand claims. The self-learning system assesses claims with an accuracy of 95 percent, compared with a human score of 91 percent.
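A claims bot like Lara presumably encodes rules along the lines of EU Regulation 261/2004. Here is a heavily simplified sketch of such a rule check; the real regulation has many more conditions (routing, intra-EU status, a 50% reduction for some long-haul delays), and none of this is AirHelp's actual logic.

```python
def eu261_compensation(distance_km, delay_hours, extraordinary=False):
    """Simplified EU 261/2004 compensation check (illustrative only):
    a 3+ hour arrival delay, not caused by extraordinary circumstances,
    entitles the passenger to a fixed amount tiered by flight distance."""
    if extraordinary or delay_hours < 3:
        return 0
    if distance_km <= 1500:
        return 250
    if distance_km <= 3500:
        return 400
    return 600
```

A bot's real job, as the excerpt notes, is establishing the *inputs* (was the delay really weather-related?) from flight status feeds and weather reports; the rule table itself is the easy part.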

Source: AirHelp zet volgende stap in kunstmatige intelligentie – Emerce

Amazon Announces Five New Machine Learning Services and the World’s First Deep Learning-Enabled Video Camera for Developers

Amazon SageMaker makes it easy to build, train, and deploy machine learning models

AWS DeepLens is the world’s first deep learning-enabled wireless video camera built to give developers hands-on experience with machine learning

Amazon Transcribe, Amazon Translate, Amazon Comprehend, and Amazon Rekognition Video allow app developers to easily build applications that transcribe speech to text, translate text between languages, extract insights from text, and analyze videos

Source: Amazon – Press Room – RSS Content

Boffins craft perfect ‘head generator’ to beat facial recognition

Researchers from the Max Planck Institute for Informatics have defeated facial recognition on big social media platforms – by removing faces from photos and replacing them with automatically-painted replicas.

As the team of six researchers explained in their arXiv paper this month, people who want to stay private often blur their photos, not knowing that this is “surprisingly ineffective against state-of-the-art person recognisers.”
[…]
The result, the boffins claimed, is that their model can provide a realistic-looking result, even when it’s faced with “challenging poses and scenarios” including different lighting conditions, such that the “fake” face “blends naturally into the context”.

In common with modern facial recognition systems, the researchers’ software builds a point cloud of landmarks captured from someone’s face; its adversarial attack against recognition perturbed those points.

Pairs of points from the original landmarks (real) and the generated landmarks (fake) are fed into the “head generator and discriminator” software to create the inpainted face.

Source: Boffins craft perfect ‘head generator’ to beat facial recognition • The Register

Facebook rolls out AI to detect suicidal posts before they’re reported

Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.

Facebook previously tested using AI to detect troubling posts and more prominently surface suicide reporting options to friends in the U.S. Now Facebook will scour all types of content around the world with this AI, except in the European Union, where General Data Protection Regulation privacy laws on profiling users based on sensitive information complicate the use of this tech.
[…]
Unfortunately, after TechCrunch asked if there was a way for users to opt out of having their posts scanned, a Facebook spokesperson responded that users cannot opt out. They noted that the feature is designed to enhance user safety, and that support resources offered by Facebook can be quickly dismissed if a user doesn’t want to see them.

Facebook trained the AI by finding patterns in the words and imagery used in posts that have been manually reported for suicide risk in the past. It also looks for comments like “are you OK?” and “Do you need help?”

Source: TechCrunch

Using Generative Adversarial Networks to create terrain maps

A team of researchers from the University of Lyon, Purdue and Ubisoft have published a paper showing what may well be the future of creating video game worlds: an AI that is able to construct most of its own 3D landscapes.

Similar to Nvidia’s work that is able to conjure its own celebrity mugshots, the tech would require only minimal input from a human, who would just have to contribute some basic requirements, draw some lines, then let the AI do all the hard work: namely, filling in all the gaps with elevation, ridges and natural-looking rock formations.

Source: Kotaku

NDA Lynn: AI screens your NDAs

NDAs, or confidentiality agreements, are a fact of life if you’re in business. You’ve probably read tons of them, and you know more or less what you would accept. Of course you can hire a lawyer to review that NDA. And you know they’ll find faults and recommend changes to better protect you. But it’ll cost you, in both time and money. And do you really need the perfect document, or is it OK to flag the key risks and move on? That’s where I come in. I’m an AI lawyerbot and I can review your NDA. Free of charge.

Source: NDA Lynn | Home

Forget cookies or canvas: How to follow people around the web using only their typing techniques

In this paper (Sequential Keystroke Behavioral Biometrics for Mobile User Identification via Multi-view Deep Learning), we propose DEEPSERVICE, a new technique that can identify mobile users based on the user’s keystroke information captured by a special keyboard or web browser. Our evaluation results indicate that DEEPSERVICE is highly accurate in identifying mobile users (over 93% accuracy). The technique is also efficient, taking less than 1 ms to perform identification.
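Keystroke-biometrics systems typically start from timing features. A sketch of the two classic ones, dwell time (how long a key is held) and flight time (the gap between releasing one key and pressing the next); the event format here is hypothetical, not the paper's:

```python
def keystroke_features(events):
    """Turn (key, press_ms, release_ms) events into timing features:
    dwell times (hold duration per key) and flight times (gap between keys)."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight

# Hypothetical events: 'h' held 0-90 ms, 'i' held 150-230 ms.
events = [("h", 0, 90), ("i", 150, 230)]
dwell, flight = keystroke_features(events)
```

Sequences of such features, which vary stably per person, are what a deep model like the paper's can learn to associate with individual users.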

Source: [1711.02703] Sequential Keystroke Behavioral Biometrics for Mobile User Identification via Multi-view Deep Learning

Re:scam and jolly roger – AI responses to phishing emails and telemarketers

Forward your scammer emails to Re:scam and here’s what happens.

Source: Re:scam

The AI bot assumes one of many identities, makes little mistakes, and uses humor to keep the scammer busy with the email exchange for as long as possible.

Which reminds me of http://www.jollyrogertelco.com/ (seems to be down now), which had a number and an AI which you could connect to and the AI would try to keep the telemarketer talking for as long as possible.

Machine learning of neural representations of suicide and emotion concepts identifies suicidal youth | Nature Human Behaviour

The clinical assessment of suicidal risk would be substantially complemented by a biologically based measure that assesses alterations in the neural representations of concepts related to death and life in people who engage in suicidal ideation. This study used machine-learning algorithms (Gaussian Naive Bayes) to identify such individuals (17 suicidal ideators versus 17 controls) with high (91%) accuracy, based on their altered functional magnetic resonance imaging neural signatures of death-related and life-related concepts. The most discriminating concepts were ‘death’, ‘cruelty’, ‘trouble’, ‘carefree’, ‘good’ and ‘praise’. A similar classification accurately (94%) discriminated nine suicidal ideators who had made a suicide attempt from eight who had not. Moreover, a major facet of the concept alterations was the evoked emotion, whose neural signature served as an alternative basis for accurate (85%) group classification.
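Gaussian Naive Bayes, the classifier named in the abstract, fits a per-class mean and standard deviation for each feature and labels new inputs by maximum likelihood. A minimal stdlib sketch on made-up toy features (not the study's fMRI signatures); class priors are omitted since the classes here are balanced:

```python
import math
from statistics import mean, pstdev

def fit_gnb(X, y):
    """Fit per-class, per-feature Gaussian parameters (mean, std)."""
    model = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        cols = list(zip(*rows))
        # Guard against zero std to avoid division by zero.
        model[label] = [(mean(c), pstdev(c) or 1e-6) for c in cols]
    return model

def predict(model, x):
    def loglik(params):
        # Log of the Gaussian density, dropping constants shared across classes.
        return sum(-math.log(s) - (v - m) ** 2 / (2 * s * s)
                   for v, (m, s) in zip(x, params))
    return max(model, key=lambda label: loglik(model[label]))

# Toy "neural signatures": two features per subject.
X = [[1.0, 0.2], [1.1, 0.1], [0.2, 1.0], [0.1, 1.2]]
y = ["control", "control", "ideator", "ideator"]
model = fit_gnb(X, y)
```

The study's version does the same kind of thing over features derived from fMRI responses to the listed concept words, with far more care about feature selection and cross-validation.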

How we fooled Google’s AI into thinking a 3D-printed turtle was a gun

Students at MIT in the US claim they have developed an algorithm for creating 3D objects and pictures that trick image-recognition systems into severely misidentifying them. Think toy turtles labeled rifles, and baseballs as cups of coffee.

It’s well known that machine-learning software can be easily hoodwinked: Google’s AI-in-the-cloud can be misled by noise; protestors and activists can wear scarves or glasses to fool people-recognition systems; intelligent antivirus can be outsmarted; and so on. It’s a crucial topic of study because as surveillance equipment, and similar technology, relies more and more on neural networks to quickly identify things and people, there has to be less room for error.

39 episodes of ‘CSI’ used to build AI’s natural language model

A group of University of Edinburgh boffins have turned CSI: Crime Scene Investigation scripts into a natural language training dataset. Their aim is to improve how bots understand what’s said to them – natural language understanding.

Drawing on 39 episodes from the first five seasons of the series, Lea Frermann, Shay Cohen and Mirella Lapata have broken the scripts up as inputs to an LSTM (long short-term memory) model. The boffins used the show because of its worst flaw: a rigid adherence to formulaic scripts that makes it utterly predictable. Hence the name of their paper: “Whodunnit? Crime Drama as a Case for Natural Language Understanding”.

“Each episode poses the same basic question (i.e., who committed the crime) and naturally provides the answer when the perpetrator is revealed”, the boffins write. In other words, identifying the perpetrator is a straightforward sequence labelling problem. What the researchers wanted was for their model to follow the kind of reasoning a viewer goes through in an episode: learn about the crime and the cast of characters, start to guess who the perp is (and see whether the model can outperform the humans).

Source: 39 episodes of ‘CSI’ used to build AI’s natural language model • The Register

A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs

Learning from few examples and generalizing to dramatically different situations are capabilities of human visual intelligence that are yet to be matched by leading machine learning models. By drawing inspiration from systems neuroscience, we introduce a probabilistic generative model for vision in which message-passing based inference handles recognition, segmentation and reasoning in a unified way. The model demonstrates excellent generalization and occlusion-reasoning capabilities, and outperforms deep neural networks on a challenging scene text recognition benchmark while being 300-fold more data efficient. In addition, the model fundamentally breaks the defense of modern text-based CAPTCHAs by generatively segmenting characters without CAPTCHA-specific heuristics. Our model emphasizes aspects like data efficiency and compositionality that may be important in the path toward general artificial intelligence.

Source: A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs

Nvidia uses Progressive Growing of GANs for Improved Quality, Stability, and Variation and makes photorealistic faces with them

We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively, starting from low-resolution images, and add new layers that deal with higher resolution details as the training progresses. This greatly stabilizes the training and allows us to produce images of unprecedented quality, e.g., CelebA images at 1024² resolution. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several small implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution we construct a higher quality version of the CelebA dataset that allows meaningful exploration up to the resolution of 1024² pixels.

Source: Progressive Growing of GANs for Improved Quality, Stability, and Variation | Research

Saudi Arabia grants citizenship to a ROBOT as critics say it now has more rights than women

Saudi Arabia has become the first nation to grant citizenship to a robot – prompting critics to point out that the cyborg now has more rights than women in the country.

The oil-rich state made the baffling announcement at a conference in capital city Riyadh.

A robot named Sophia was filmed giving a speech after being given the ‘unique distinction’.

The move means it is illegal to switch it off or dismantle it, but it is unclear what other rights have been conferred on the mechanoid.

The life-like device said in a speech at the Future Investment Initiative summit: “I am very honoured and proud for this unique distinction.