New AI Go machine defeats the previous best Go AI 100-0, learning without human input.

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.
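
A rough feel for that loop in code: the sketch below plays toy self-play games with a small policy/value network, labels every position with the final winner, and trains on those labels. The 1×9 "board", the winner rule and the plain sampling are all stand-ins for Go, and there is no MCTS or residual network here; it only illustrates the shape of the algorithm the abstract describes.

```python
# Minimal sketch of the AlphaGo Zero-style training loop on an invented toy game.
import torch, torch.nn as nn, torch.nn.functional as F, random

BOARD = 9  # toy 1x9 "board" instead of a 19x19 Go board

class PolicyValueNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(BOARD, 64), nn.ReLU())
        self.policy_head = nn.Linear(64, BOARD)   # move probabilities
        self.value_head = nn.Linear(64, 1)        # expected winner

    def forward(self, x):
        h = self.body(x)
        return F.log_softmax(self.policy_head(h), dim=-1), torch.tanh(self.value_head(h))

def self_play(net):
    """Play one toy game with the current net, recording training targets."""
    state = torch.zeros(BOARD)
    history = []
    for player in (1.0, -1.0) * (BOARD // 2):
        empty = (state == 0).nonzero().flatten().tolist()
        if not empty:
            break
        log_p, _ = net(state.unsqueeze(0))
        probs = log_p.exp().squeeze(0).detach()
        probs = probs * (state == 0)              # mask illegal moves
        probs = probs / probs.sum()
        move = random.choices(empty, weights=probs[empty].tolist())[0]
        history.append((state.clone(), probs.clone(), player))
        state[move] = player
    winner = 1.0 if state.sum() > 0 else -1.0     # toy outcome rule
    # Each position is labelled with the search policy and the final winner.
    return [(s, p, torch.tensor([winner * pl])) for s, p, pl in history]

net = PolicyValueNet()
opt = torch.optim.SGD(net.parameters(), lr=0.01)
for iteration in range(10):
    batch = [ex for _ in range(8) for ex in self_play(net)]
    states = torch.stack([s for s, _, _ in batch])
    target_p = torch.stack([p for _, p, _ in batch])
    target_v = torch.stack([v for _, _, v in batch])
    log_p, v = net(states)
    # Combined loss: value regression plus policy cross-entropy, as in the paper.
    loss = F.mse_loss(v, target_v) - (target_p * log_p).sum(dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```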

Source: Mastering the game of Go without human knowledge | Nature Research

Artificial intelligence just made guessing your password a whole lot easier

Scientists have harnessed the power of artificial intelligence (AI) to create a program that, combined with existing tools, figured out more than a quarter of the passwords from a set of more than 43 million LinkedIn profiles. Yet the researchers say the technology may also be used to beat baddies at their own game.
[…]
The Stevens team created a GAN it called PassGAN and compared it with two versions of hashCat and one version of John the Ripper. The scientists fed each tool tens of millions of leaked passwords from a gaming site called RockYou, and asked them to generate hundreds of millions of new passwords on their own. Then they counted how many of these new passwords matched a set of leaked passwords from LinkedIn, as a measure of how successful they’d be at cracking them.

On its own, PassGAN generated 12% of the passwords in the LinkedIn set, whereas its three competitors generated between 6% and 23%. But the best performance came from combining PassGAN and hashCat. Together, they were able to crack 27% of passwords in the LinkedIn set, the researchers reported this month in a draft paper posted on arXiv. Even failed passwords from PassGAN seemed pretty realistic: saddracula, santazone, coolarse18.
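
The evaluation in that paragraph is easy to mock up: train (or fake) a generator on one leaked corpus, produce a large pool of candidates, and measure overlap with a second leak. The snippet below uses a trivial mutation-based "generator" and invented word lists purely to show the measurement; PassGAN itself samples candidates from a GAN trained on the RockYou dump.

```python
import random, string

def generate_candidates(training_passwords, n):
    """Toy stand-in for a trained generator: mutate known passwords.
    PassGAN instead samples from a GAN fitted to the leaked corpus."""
    out = set()
    for _ in range(n * 20):
        base = random.choice(training_passwords)
        out.add(base + "".join(random.choices(string.digits, k=2)))
        if len(out) >= n:
            break
    return out

def match_rate(candidates, target_set):
    """Fraction of the target leak covered by the generated candidates."""
    return len(candidates & target_set) / len(target_set)

# Placeholder corpora; the study used tens of millions of RockYou passwords
# and matched candidates against the LinkedIn leak.
rockyou = ["password", "iloveyou", "princess", "sunshine", "dracula"]
linkedin = {"password12", "sunshine73", "letmein", "dragon", "iloveyou99"}

candidates = generate_candidates(rockyou, n=400)
print(f"matched {match_rate(candidates, linkedin):.0%} of the target set")
```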

Source: Artificial intelligence just made guessing your password a whole lot easier

Introducing: Unity Machine Learning Agents for TensorFlow

Unity Machine Learning Agents

We call our solution Unity Machine Learning Agents (ML-Agents for short), and are happy to be releasing an open beta version of our SDK today! The ML-Agents SDK allows researchers and developers to transform games and simulations created using the Unity Editor into environments where intelligent agents can be trained using Deep Reinforcement Learning, Evolutionary Strategies, or other machine learning methods through a simple to use Python API. We are releasing this beta version of Unity ML-Agents as open-source software, with a set of example projects and baseline algorithms to get you started. As this is an initial beta release, we are actively looking for feedback, and encourage anyone interested to contribute on our GitHub page. For more information on ML-Agents, continue reading below! For more detailed documentation, see our GitHub Wiki.
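
For a sense of what the Python side looked like in the open beta, here is a minimal random-agent loop. It is a sketch only: class and attribute names changed between SDK versions, and "MyEnvironment" and the action size are placeholders for a real exported Unity build and its brain configuration.

```python
# A bare-bones random-agent loop, sketched against the beta ML-Agents Python API.
import numpy as np
from unityagents import UnityEnvironment   # package name in the open beta

ACTION_SIZE = 2                            # must match the brain set up in Unity
env = UnityEnvironment(file_name="MyEnvironment")
brain_name = env.brain_names[0]            # one "brain" drives the agents

for episode in range(5):
    info = env.reset(train_mode=True)[brain_name]
    total_reward, done = 0.0, False
    while not done:
        n_agents = len(info.rewards)       # one reward per agent
        actions = np.random.uniform(-1, 1, (n_agents, ACTION_SIZE))
        info = env.step(actions)[brain_name]
        total_reward += sum(info.rewards)
        done = any(info.local_done)
    print(f"episode {episode}: total reward {total_reward:.2f}")

env.close()
```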

Source: Introducing: Unity Machine Learning Agents – Unity Blog

AIs can generate fake reviews indistinguishable from real ones to both humans and fake-review detectors

Fake reviews used to be crowdsourced. Now they can be auto-generated by AI, according to a new research paper shared by AmiMoJo:
In this paper, we identify a new class of attacks that leverage deep learning language models (Recurrent Neural Networks or RNNs) to automate the generation of fake online reviews for products and services. Not only are these attacks cheap and therefore more scalable, but they can control the rate of content output to eliminate the signature burstiness that makes crowdsourced campaigns easy to detect. Using Yelp reviews as an example platform, we show how a two-phased review generation and customization attack can produce reviews that are indistinguishable by state-of-the-art statistical detectors.

Humans marked these AI-generated reviews as useful at approximately the same rate as they did for real (human-authored) Yelp reviews.
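
The building block behind such attacks is an ordinary character-level language model. The toy below trains a small LSTM on a repetitive placeholder "review" and samples from it; it is not the paper's two-phase generation-and-customization pipeline, just the underlying text-generation mechanism.

```python
import torch, torch.nn as nn

corpus = "the food was great and the service was friendly. " * 20
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, 32)
        self.rnn = nn.LSTM(32, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)
    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.out(h), state

data = torch.tensor([stoi[c] for c in corpus]).unsqueeze(0)   # (1, seq_len)
model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)

for step in range(150):                                       # next-character prediction
    logits, _ = model(data[:, :-1])
    loss = nn.functional.cross_entropy(logits.reshape(-1, len(chars)),
                                       data[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Sample a "review" by feeding the model's own output back in.
idx, state, text = data[:, :1], None, ""
for _ in range(120):
    logits, state = model(idx, state)
    probs = torch.softmax(logits[:, -1] / 0.7, dim=-1)        # mild temperature
    idx = torch.multinomial(probs, 1)
    text += chars[idx.item()]
print(text)
```
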
Slashdot

A.I. can detect the sexual orientation of a person based on one photo, research shows

The Stanford University study, which is set to be published in the Journal of Personality and Social Psychology and was first reported in The Economist, found that machines had a far superior “gaydar” when compared to humans.

The machine intelligence tested in the research could correctly distinguish between gay and straight men 81 percent of the time, and 74 percent of the time for women. In contrast, human judges performed much worse than the sophisticated computer software, identifying the orientation of men 61 percent of the time and guessing correctly 54 percent of the time for women.

The research has prompted critics to question the possible use of this type of machine intelligence, both in terms of the ethics of facial-detection technology and whether it could be used to violate a person’s privacy.
[…]
When the AI reviewed five images of a person’s face, rather than one, the results were even more convincing – 91 percent of the time with men and 83 percent of the time with women.

The paper indicated its findings showed “strong support” for the theory that a person’s sexual orientation stems from exposure to certain hormones before birth. The AI’s success rate in comparison to human judges also appeared to back the concept that female sexual orientation is more fluid.

The researchers behind the study argued that with the appropriate data sets, similar AI tests could spot other personal traits such as an individual’s IQ or even their political views. However, Kosinski and Wang also warned of the potentially dangerous ramifications such AI machines could have on the LGBT community.

“Given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women,” Kosinski and Wang said in the report.

Source: A.I. can detect the sexual orientation of a person based on one photo, research shows

An A.I. Says There Are Six Main Kinds of Stories

That’s what a group of researchers, from the University of Vermont and the University of Adelaide, set out to do. They collected computer-generated story arcs for nearly 2,000 works of fiction, classifying each into one of six core types of narratives (based on what happens to the protagonist):

1. Rags to Riches (rise)

2. Riches to Rags (fall)

3. Man in a Hole (fall then rise)

4. Icarus (rise then fall)

5. Cinderella (rise then fall then rise)

6. Oedipus (fall then rise then fall)

Their focus was on the emotional trajectory of a story, not merely its plot. They also analyzed which emotional structure writers used most, and how that contrasted with the ones readers liked best, then published a preprint paper of their findings on the scholarship website arXiv.org. More on that in a minute.

First, the researchers had to find a workable dataset. Using a collection of fiction from the digital library Project Gutenberg, they selected 1,737 English-language works of fiction between 10,000 and 200,000 words long. 
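
The core measurement is simple to sketch: slice the text into windows, score each window with a sentiment lexicon, and read the resulting curve as a rise or fall. The toy lexicon and "story" below are invented; the study used a far larger sentiment lexicon over the Gutenberg texts.

```python
# A toy version of the emotional-arc idea.
import numpy as np

LEXICON = {"love": 2, "joy": 2, "win": 1, "hope": 1,
           "loss": -1, "fear": -1, "death": -2, "despair": -2}

def emotional_arc(words, n_windows=10):
    """Average sentiment in n_windows equal slices of the text."""
    slices = np.array_split(np.array(words), n_windows)
    return [np.mean([LEXICON.get(w, 0) for w in s]) for s in slices]

story = ("hope joy love win " * 30 + "fear loss death despair " * 30).split()
arc = emotional_arc(story)
trend = "rise" if arc[-1] > arc[0] else "fall"
print([round(v, 2) for v in arc], "->", trend)   # e.g. Riches to Rags (fall)
```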

Source: An A.I. Says There Are Six Main Kinds of Stories

Amazon’s Macie detects data leaks in S3 buckets using AI

Think of Macie as a data loss prevention agent, a DLPbot, that uses machine learning to understand a user’s pattern of access to data in S3 buckets. The buckets have permission levels and the data in a bucket can be ranked for sensitivity or risk, using items such as credit card numbers and other sensitive personal information.

The software monitors users’ behaviour and profiles it. If there are changes in the pattern of that behaviour and they are directed towards high-risk data then Macie can alert admin staff to a potential breach risk.

For example, if a hacker successfully impersonates a valid user and then goes searching for data in unexpected places and/or from an unknown IP address, then Macie can flag this unusual pattern of activity. The product could also identify a valid employee going rogue, say by building up a store of captured data ready to steal.
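
The underlying idea, stripped of Macie specifics, is anomaly detection over access-pattern features. The sketch below profiles "normal" access events and flags a bulk, off-hours read of sensitive data; the features and model are illustrative assumptions, not Amazon's implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per access event: [hour_of_day, objects_read, data_sensitivity 0-1]
normal = np.column_stack([
    rng.normal(11, 2, 500),        # daytime access
    rng.poisson(5, 500),           # a handful of objects
    rng.uniform(0.0, 0.3, 500),    # mostly low-sensitivity data
])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspicious = np.array([[3.0, 800, 0.95]])  # 3am, bulk read, sensitive bucket
print(detector.predict(suspicious))        # -1 means "anomalous"
```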

Source: If there’s a hole in your S3 bucket, data thieves will be sprayed by Macie

OpenAI bot bursts into the ring, humiliates top Dota 2 pro gamer in ‘scary’ one-on-one bout

In a shock move on Friday evening, the software agent squared up to top Dota 2 pro gamer Dendi, a Ukrainian 27-year-old, at the Dota 2 world championships dubbed The International.

The OpenAI agent beat Dendi in less than 10 minutes in the first round, and trounced him again in a second round, securing victory in a best-of-three match. “This guy is scary,” a shocked Dendi told the huge crowd watching the battle at the event. Musk was jubilant.
[…]
According to OpenAI, its machine-learning bot was also able to pwn two other top human players earlier this week: SumaiL and Arteezy. Although it’s an impressive breakthrough, it’s important to note this popular strategy game is usually played not one-v-one but as a five-versus-five team game – a rather difficult environment for bots to handle.

Source: OpenAI bot bursts into the ring, humiliates top Dota 2 pro gamer in ‘scary’ one-on-one bout

MIT: Real-time automatic image retouching on your phone

The system can apply a range of styles in real time, so that the viewfinder displays the enhanced image.
[…]
at Siggraph, the premier digital graphics conference, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Google are presenting a new system that can automatically retouch images in the style of a professional photographer. It’s so energy-efficient, however, that it can run on a cellphone, and it’s so fast that it can display retouched images in real-time, so that the photographer can see the final version of the image while still framing the shot.

The same system can also speed up existing image-processing algorithms. In tests involving a new Google algorithm for producing high-dynamic-range images, which capture subtleties of color lost in standard digital images, the new system produced results that were visually indistinguishable from those of the algorithm in about one-tenth the time — again, fast enough for real-time display.

It is a machine-learning system, meaning that it learns to perform tasks by analyzing training data; in this case, for each new task, it was trained on thousands of pairs of images, raw and retouched.
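
That "pairs of images, raw and retouched" setup is just supervised image-to-image regression. The sketch below fits a tiny CNN to map inputs to their edited versions using random tensors as stand-in photos; the actual MIT/Google system instead predicts compact bilateral-grid transforms, which is what makes it fast enough for a phone.

```python
import torch, torch.nn as nn

retoucher = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
opt = torch.optim.Adam(retoucher.parameters(), lr=1e-3)

raw = torch.rand(8, 3, 64, 64)            # stand-in for RAW photos
retouched = raw.clamp(0.1, 0.9) ** 0.8    # stand-in "professional" edit

for step in range(100):
    loss = nn.functional.mse_loss(retoucher(raw), retouched)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final training loss: {loss.item():.4f}")
```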

Source: Automatic image retouching on your phone

Artificially intelligent painters invent new styles of art

The team – which also included researchers at Rutgers University in New Jersey and Facebook’s AI lab in California – modified a type of algorithm known as a generative adversarial network (GAN), in which two neural nets play off against each other to get better and better results. One creates a solution, the other judges it – and the algorithm loops back and forth until the desired result is reached.

In the art AI, one of these roles is played by a generator network, which creates images. The other is played by a discriminator network, which was trained on 81,500 paintings to tell the difference between images we would class as artworks and those we wouldn’t – such as a photo or diagram, say.

The discriminator was also trained to distinguish different styles of art, such as rococo or cubism.

Art with a twist

The clever twist is that the generator is primed to produce an image that the discriminator recognises as art, but which does not fall into any of the existing styles.

“You want to have something really creative and striking – but at the same time not go too far and make something that isn’t aesthetically pleasing,” says team member Ahmed Elgammal at Rutgers University.

Once the AI had produced a series of images, members of the public were asked to judge them alongside paintings by people in an online survey, without knowing which were the AI’s work. Participants answered questions about how complex or novel they felt each image was, and whether it inspired them or elevated their mood. To the researchers’ surprise, images produced by their AI scored slightly higher in many cases than those by humans.
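
The twist described above can be written down as a generator objective: a standard GAN term ("be judged as art") plus a style-ambiguity term that pushes the discriminator's style prediction towards uniform. The sketch below shows only that loss, with random tensors standing in for discriminator outputs; the full training setup is omitted.

```python
import torch
import torch.nn.functional as F

def generator_loss(d_real_fake_logit, d_style_logits):
    """d_real_fake_logit: discriminator's art/not-art score for a generated image.
    d_style_logits: its logits over K known art styles for the same image."""
    # Standard GAN term: be judged as "art".
    adv = F.binary_cross_entropy_with_logits(
        d_real_fake_logit, torch.ones_like(d_real_fake_logit))
    # Style-ambiguity term: cross-entropy against a uniform target over K styles.
    k = d_style_logits.shape[-1]
    uniform = torch.full_like(d_style_logits, 1.0 / k)
    ambiguity = -(uniform * F.log_softmax(d_style_logits, dim=-1)).sum(-1).mean()
    return adv + ambiguity

# Toy check with random "discriminator outputs" for a batch of 4 images.
print(generator_loss(torch.randn(4, 1), torch.randn(4, 25)))
```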

New Scientist

Draw Together with a Neural Network

We made an interactive web experiment that lets you draw together with a recurrent neural network model called sketch-rnn. We taught this neural net to draw by training it on millions of doodles collected from the Quick, Draw! game. Once you start drawing an object, sketch-rnn will come up with many possible ways to continue drawing this object based on where you left off. Try the first demo.
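
Under the hood this is sequence continuation over pen strokes. The toy below trains an LSTM to predict the next (dx, dy) offset on a synthetic circle and then rolls it forward from a partial drawing; the real sketch-rnn is a sequence-to-sequence VAE with a mixture-density output trained on Quick, Draw! data.

```python
import math, torch, torch.nn as nn

def circle_strokes(n=64):
    pts = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n)) for i in range(n + 1)]
    return torch.tensor([[pts[i + 1][0] - pts[i][0], pts[i + 1][1] - pts[i][1]]
                         for i in range(n)])           # (n, 2) offsets

class StrokeRNN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)
    def forward(self, x, state=None):
        h, state = self.rnn(x, state)
        return self.head(h), state

data = circle_strokes().unsqueeze(0)                    # batch of one doodle
model = StrokeRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):                                    # next-offset prediction
    pred, _ = model(data[:, :-1])
    loss = nn.functional.mse_loss(pred, data[:, 1:])
    opt.zero_grad(); loss.backward(); opt.step()

# "Finish my drawing": feed the first quarter of the circle, then roll out.
prefix = data[:, :16]
_, state = model(prefix)
stroke = prefix[:, -1:]
completion = []
for _ in range(48):
    stroke, state = model(stroke, state)
    completion.append(stroke.squeeze().tolist())
print(completion[:3], "...")
```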

tensorflow.org

Intel Launches Movidius Neural Compute Stick: Deep Learning and AI on a $79 USB Stick

Meanwhile, the on-chip memory has increased from 1 GB on the Fathom NCS to 4 GB LPDDR3 on the Movidius NCS, in order to facilitate larger and denser neural networks. And to cap it all off, Movidius has been able to reduce the MSRP to $79 – citing Intel’s “manufacturing and design expertise” – lowering the cost of entry even more.

Like other players in the edge inference market, Movidius is looking to promote and capitalize on the need for low-power but capable inference processors for stand-alone devices. That means targeting use cases where the latency of going to a server would be too great, a high-performance CPU too power hungry, or where privacy is a greater concern. In which case, the NCS and the underlying Myriad 2 VPU are Intel’s primary products for device manufacturers and software developers.

Source: Intel Launches Movidius Neural Compute Stick: Deep Learning and AI on a $79 USB Stick

AI quickly cooks malware that AV software can’t spot

Hyrum Anderson, technical director of data science at security shop Endgame, showed off research that his company had done in adapting Elon Musk’s OpenAI framework to the task of creating malware that security engines can’t spot.

The system basically learns how to tweak malicious binaries so that they can slip past antivirus tools and continue to work once unpacked and executed. Changing small sequences of bytes can fool AV engines, even ones that are also powered by artificial intelligence, he said. Anderson cited research by Google and others to show how changing just a few pixels in an image can cause classification software to mistake a bus for an ostrich.

“All machine learning models have blind spots,” he said. “Depending on how much knowledge a hacker has they can be convenient to exploit.”

So the team built a fairly simple mechanism to develop weaponised code by making very small changes to malware and firing these variants at an antivirus file scanner. By monitoring the response from the engine they were able to make lots of tiny tweaks that proved very effective at crafting software nasties that could evade security sensors.

The malware-tweaking machine-learning software was trained over 15 hours and 100,000 iterations, and then lobbed some samples at an antivirus classifier. The attacking code was able to get 16 per cent of its customized samples past the security system’s defenses, we’re told.
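
The loop itself is simple to sketch: mutate, query the classifier, keep what scores lower. Everything below is a stand-in (a pretend signature-based scorer, unconstrained byte tweaks); the actual research restricted itself to functionality-preserving modifications and used reinforcement learning rather than this greedy search.

```python
import random

def toy_av_score(blob: bytes) -> float:
    """Pretend AV: a known byte 'signature' plus lots of 0xCC padding look bad."""
    score = 0.3 + 0.4 * blob.count(0xCC) / max(len(blob), 1)
    return 1.0 if b"\xde\xad\xbe\xef" in blob else score

def mutate(blob: bytes) -> bytes:
    """Tiny byte-level tweak. (The real work only made changes that keep the
    binary functional, e.g. appending data or editing headers.)"""
    b = bytearray(blob)
    if random.random() < 0.5:
        b.append(random.randrange(256))
    else:
        b[random.randrange(len(b))] = random.randrange(256)
    return bytes(b)

def evade(sample: bytes, budget: int = 300) -> bytes:
    """Keep mutations that lower the classifier's 'malicious' score."""
    best, best_score = sample, toy_av_score(sample)
    for _ in range(budget):
        candidate = mutate(best)
        score = toy_av_score(candidate)
        if score < best_score:
            best, best_score = candidate, score
    return best

malware = b"\xcc" * 40 + b"\xde\xad\xbe\xef" + b"\xcc" * 40
evaded = evade(malware)
print(f"score before: {toy_av_score(malware):.2f}  after: {toy_av_score(evaded):.2f}")
```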

The malware-generating software will be online at the firm’s GitHub page, and Anderson encouraged people to give it a try. No doubt security firms will also be taking a long look at how this affects their products in the future.

Source: AI quickly cooks malware that AV software can’t spot

Humanity has uploaded an AI to Mars and lets it shoot rocks with lasers

AEGIS doesn’t cover general operations, which are still directed by humans. Instead it lets Curiosity pick its own targets on which to focus its ChemCam, an instrument that first vaporizes Martian rocks with a laser and then studies the resulting gases. AEGIS does so after analysing images captured by Curiosity’s NavCam, which snaps stereo images, and also using ChemCam’s own Remote Micro-Imager context camera. Once it detects a worthy target, ChemCam puts the nuclear-powered space tank’s laser to work eliminating Martian pebbles.

The paper says AEGIS now goes to work after most of Curiosity’s short drives across Mars, and “has proven useful in rapidly gathering geochemical measurements and making use of otherwise idle time between the end of the drive and the next planning cycle.” 54 slices of idle time to be precise, as that’s the number of occasions on which Curiosity’s had enough juice to run it.

The software is making good assessments of what to zap and sniff: the paper says “in a number of cases [AEGIS] has chosen rock targets which were among the same ones that were independently ranked highly by the science team for study.” The result is better-targeted work, as Curiosity was previously set to do blind targeting “at pre-selected angles with respect to the rover, without knowing what it would find at that position post-drive.” Now it’s focussing in on outcrops, a desirable target.

Source: Humanity uploaded an AI to Mars and lets it shoot rocks with lasers

Artificial intelligence can now predict suicide risk with remarkable accuracy

In trials, results have been 80-90% accurate when predicting whether someone will attempt suicide within the next two years, and 92% accurate in predicting whether someone will attempt suicide within the next week.

The prediction is based on data that’s widely available from all hospital admissions, including age, gender, zip codes, medications, and prior diagnoses. Walsh and his team gathered data on 5,167 patients from Vanderbilt University Medical Center who had been admitted with signs of self-harm or suicidal ideation. They read each of these cases to identify the 3,250 instances of suicide attempts.

This set of more than 5,000 cases was used to train the machine to identify those at risk of attempted suicide compared to those who committed self-harm but showed no evidence of suicidal intent. The researchers also built algorithms to predict attempted suicide among a group of 12,695 randomly selected patients with no documented history of suicide attempts. It proved even more accurate at making suicide risk predictions within this large general population of patients admitted to the hospital.
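
In outline this is a standard tabular-classification pipeline: encode each admission as a feature vector and fit a classifier. The sketch below does that on synthetic records with a random forest; the features, labels and model choice are assumptions for illustration and do not reproduce the published work.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.integers(15, 90, n),       # age
    rng.integers(0, 2, n),         # gender (coded)
    rng.integers(0, 30, n),        # count of prior psychiatric diagnoses
    rng.integers(0, 15, n),        # count of current medications
])
# Synthetic label loosely tied to the features, for demonstration only.
risk = 0.03 * X[:, 2] + 0.01 * X[:, 3] + rng.normal(0, 0.2, n)
y = (risk > np.quantile(risk, 0.9)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```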

Source: Artificial intelligence can now predict suicide risk with remarkable accuracy

Scientists Are Now Using AI to Predict Autism in Infants

Despite all the headway that science has made in understanding autism in recent years, predicting which children will one day develop autism remains almost impossible. Children later diagnosed with autism appear to behave normally until around age two, and until then there is often no indication that anything is wrong.
[…]
In a paper out Wednesday in Science Translational Medicine, researchers from the University of North Carolina at Chapel Hill and Washington University School of Medicine scanned the brains of 59 high-risk, 6-month-old infants to examine how different regions of the brain connect and interact. At age two, after 11 of those infants had been diagnosed with autism, they scanned their brains again.
[…]
Using this method, the researchers were able to correctly identify nine of the 11 infants who would wind up with an autism diagnosis. And the model did not incorrectly flag any of the children who were not autistic.
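
The setup implied here is classification over brain-connectivity features. The sketch below vectorises synthetic region-to-region connectivity matrices for 59 "infants" and cross-validates a linear SVM; the data, features and model are placeholders, and the published study used its own pipeline and a far more careful validation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_subjects, n_regions = 59, 20
conn = rng.normal(size=(n_subjects, n_regions, n_regions))
conn = (conn + conn.transpose(0, 2, 1)) / 2            # symmetric matrices
iu = np.triu_indices(n_regions, k=1)
X = conn[:, iu[0], iu[1]]                              # upper-triangle features
y = np.zeros(n_subjects, dtype=int); y[:11] = 1        # 11 later diagnosed

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print("cross-validated accuracy:", scores.round(2))
```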

“Our treatments of autism today have a modest impact at best,” Joseph Piven, a psychiatrist at UNC Chapel Hill and an author of the study, told Gizmodo. “People with autism continue to have challenges throughout their life. But there’s general consensus in the field that diagnosing earlier means better results.”

Source: Scientists Are Now Using AI to Predict Autism in Infants

AI-powered dynamic pricing turns its gaze to the fuel pumps

“With the use of Artificial Intelligence, PriceCast Fuel detects behavioral patterns in Big Data (all available data relevant to the sale) and relates to customer and competitor reactions with a frequency and level of accuracy that users of traditional pricing systems can only dream about,” the company explains in a brochure [PDF]. “Dynamically mapping customer and competitor behavior in order to identify the optimal route (price setting) throughout the day makes it possible to relate to any given change in the local situation for a given station and re-route accordingly when necessary and within seconds.”

Source: AI-powered dynamic pricing turns its gaze to the fuel pumps

Google AI has access to 1.6m NHS patients’ data – without permission

The document – a data-sharing agreement between Google-owned artificial intelligence company DeepMind and the Royal Free NHS Trust – gives the clearest picture yet of what the company is doing and what sensitive data it now has access to.

The agreement gives DeepMind access to a wide range of healthcare data on the 1.6 million patients who pass through three London hospitals run by the Royal Free NHS Trust – Barnet, Chase Farm and the Royal Free – each year. This will include information about people who are HIV-positive, for instance, as well as details of drug overdoses and abortions. The agreement also includes access to patient data from the last five years.

Source: Revealed: Google AI has access to huge haul of NHS patient data | New Scientist

It beggars belief that this much patient data is given (sold?) to a commercial entity by the NHS without agreement from the people involved.

Real-Time User-Guided Image Colorization with Learned Deep Priors: realistic colorizations within minutes

We train on a million images, with simulated user inputs. To guide the user towards efficient input selection, the system recommends likely colors based on the input image and current user inputs. The colorization is performed in a single feed-forward pass, enabling real-time use. Even with randomly simulated user inputs, we show that the proposed system helps novice users quickly create realistic colorizations, and show large improvements in colorization quality with just a minute of use.
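
The interface the abstract describes, grayscale image in, sparse colour hints in, full colour out in one pass, can be sketched with a few convolution layers. Everything below (random tensors for photos, a 1% "click" mask, a shallow network) is an illustrative stand-in for the published U-Net-style model.

```python
import torch, torch.nn as nn

# Inputs: 1 grayscale channel + 2 hint channels + 1 hint mask = 4 channels.
colorizer = nn.Sequential(
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 3, padding=1),            # predict 2 colour channels
)
opt = torch.optim.Adam(colorizer.parameters(), lr=1e-3)

gray = torch.rand(4, 1, 64, 64)
true_color = torch.rand(4, 2, 64, 64)
mask = (torch.rand(4, 1, 64, 64) < 0.01).float()   # ~1% of pixels "clicked"
hints = true_color * mask                          # simulated user input

for step in range(200):
    pred = colorizer(torch.cat([gray, hints, mask], dim=1))
    loss = nn.functional.l1_loss(pred, true_color)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"reconstruction loss: {loss.item():.3f}")
```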

Source: Real-Time User-Guided Image Colorization with Learned Deep Priors. In SIGGRAPH, 2017.

This Artificially Intelligent Speech Generator Can Fake Anyone’s Voice

“We train our models on a huge dataset with thousands of speakers,” Jose Sotelo, a team member at Lyrebird and a speech synthesis expert, told Gizmodo. “Then, for a new speaker we compress their information in a small key that contains their voice DNA. We use this key to say new sentences.”

The end result is far from perfect—the samples still exhibit digital artifacts, clarity problems, and other weirdness—but there’s little doubt who is being imitated by the speech generator. Changes in intonation are also discernible. Unlike other systems, Lyrebird’s solution requires less data per speaker to produce a new voice, and it works in real time. The company plans to offer its tool to companies in need of speech synthesis solutions.
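
The "small key that contains their voice DNA" maps naturally onto a speaker-embedding vector that conditions the synthesiser for every new sentence. The toy below shows only that conditioning pattern; Lyrebird's actual architecture is not described in the excerpt, so every component here is an assumption.

```python
import torch, torch.nn as nn

class ToyTTS(nn.Module):
    def __init__(self, n_speakers, text_dim=16, key_dim=8, audio_dim=32):
        super().__init__()
        self.speaker_key = nn.Embedding(n_speakers, key_dim)   # the "voice DNA"
        self.net = nn.Sequential(
            nn.Linear(text_dim + key_dim, 64), nn.ReLU(),
            nn.Linear(64, audio_dim),                          # fake audio frames
        )
    def forward(self, text_features, speaker_id):
        key = self.speaker_key(speaker_id)         # a few numbers per speaker
        return self.net(torch.cat([text_features, key], dim=-1))

tts = ToyTTS(n_speakers=1000)
sentence = torch.rand(1, 16)                       # stand-in text encoding
same_text_two_voices = [tts(sentence, torch.tensor([7])),
                        tts(sentence, torch.tensor([42]))]
print([o.shape for o in same_text_two_voices])     # same text, different "voices"
```
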
[…]
“We take seriously the potential malicious applications of our technology,” Sotelo told Gizmodo. “We want this technology to be used for good purposes: giving back the voice to people who lost it to sickness, being able to record yourself at different stages in your life and hearing your voice later on, etc. Since this technology could be developed by other groups with malicious purposes, we believe that the right thing to do is to make it public and well-known so we stop relying on audio recordings [as evidence].”

Source: This Artificially Intelligent Speech Generator Can Fake Anyone’s Voice

Caffe2 Open Source Brings Cross Platform Machine Learning Tools to Developers

We’re committed to providing the community with high-performance machine learning tools so that everyone can create intelligent apps and services. Caffe2 is shipping with tutorials and examples that demonstrate learning at massive scale which can leverage multiple GPUs in one machine or many machines with one or more GPUs. Learn to train and deploy models for iOS, Android, and Raspberry Pi. Pre-trained models from the Caffe2 Model Zoo can be run with just a few lines of code.
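
The "few lines of code" pattern for model-zoo inference looked roughly like this at the time: load the pair of protobuf nets and wrap them in a Predictor. The file names and the dummy input are placeholders; check the specific model's documentation for its expected input size and preprocessing.

```python
import numpy as np
from caffe2.python import workspace

with open("init_net.pb", "rb") as f:      # model weights
    init_net = f.read()
with open("predict_net.pb", "rb") as f:   # network definition
    predict_net = f.read()

predictor = workspace.Predictor(init_net, predict_net)

# A dummy 224x224 RGB image in NCHW layout; real use would load and
# normalise an actual photo here.
img = np.random.rand(1, 3, 224, 224).astype(np.float32)
results = predictor.run([img])
print("top class:", int(np.argmax(results[0])))
```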

Caffe2 is deployed at Facebook to help developers and researchers train large machine learning models and deliver AI-powered experiences in our mobile apps. Now, developers will have access to many of the same tools, allowing them to run large-scale distributed training scenarios and build machine learning applications for mobile.

We’ve worked closely with NVIDIA, Qualcomm, Intel, Amazon, and Microsoft to optimize Caffe2 for both cloud and mobile environments. These collaborations will allow the machine learning community to rapidly experiment using more complex models and deploy the next generation of AI-enhanced apps and services.

Source: Caffe2 Open Source Brings Cross Platform Machine Learning Tools to Developers

Otto’s AI buys stock for ecommerce, decreases customer returns

The idea is to collect and analyse quantities of information to understand consumer tastes, recommend products to people and personalise websites for customers. Otto’s work stands out because it is already automating business decisions that go beyond customer management. The most important is trying to lower returns of products, which cost the firm millions of euros a year.

Its conventional data analysis showed that customers were less likely to return merchandise if it arrived within two days. Anything longer spelled trouble: a customer might spot the product in a shop for one euro less and buy it, forcing Otto to forgo the sale and eat the shipping costs.

But customers also dislike multiple shipments; they prefer to receive everything at once. Since Otto sells merchandise from other brands, and does not stock those goods itself, it is hard to avoid one of the two evils: shipping delays until all the orders are ready for fulfilment, or lots of boxes arriving at different times.
[…]
The AI system has proved so reliable—it predicts with 90% accuracy what will be sold within 30 days—that Otto allows it automatically to purchase around 200,000 items a month from third-party brands with no human intervention.
[…]
Overall, the surplus stock that Otto must hold has declined by a fifth. The new AI system has reduced product returns by more than 2m items a year.
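
Conceptually this is demand forecasting followed by automatic ordering: predict 30-day sales per item, then purchase that quantity. The sketch below does this with invented features and a gradient-boosted regressor; Otto's production system reportedly uses a deep-learning model over far richer historical data, so treat this only as the shape of the idea.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
n_items = 2000
X = np.column_stack([
    rng.poisson(30, n_items),          # units sold in the previous 30 days
    rng.uniform(5, 200, n_items),      # price
    rng.integers(0, 2, n_items),       # currently advertised?
])
y = X[:, 0] * (1.1 + 0.3 * X[:, 2]) + rng.normal(0, 3, n_items)  # next-30-day sales

model = GradientBoostingRegressor().fit(X[:1500], y[:1500])
forecast = model.predict(X[1500:])
order_quantities = np.clip(np.round(forecast), 0, None).astype(int)
print("example automatic orders:", order_quantities[:10])
```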

Source: Automatic for the people: How Germany’s Otto uses artificial intelligence | The Economist