Generative AI models have a propensity for learning complex data distributions, which is why they’re great at producing human-like speech and convincing images of burgers and faces. But training these models requires lots of labeled data, and depending on the task at hand, the necessary corpora are sometimes in short supply.
The solution might lie in an approach proposed by researchers at Google and ETH Zurich. In a paper published on the preprint server arXiv.org (“High-Fidelity Image Generation With Fewer Labels”), they describe a “semantic extractor” that can pull out features from training data, along with methods of inferring labels for an entire training set from a small subset of labeled images. These self- and semi-supervised techniques together, they say, can outperform state-of-the-art methods on popular benchmarks like ImageNet.
“In a nutshell, instead of providing hand-annotated ground truth labels for real images to the discriminator, we … provide inferred ones,” the paper’s authors explained.
In one of several unsupervised methods the researchers posit, they first extract a feature representation — a set of techniques for automatically discovering the representations needed for raw data classification — on a target training dataset using the aforementioned feature extractor. They then perform cluster analysis — i.e., grouping the representations in such a way that those in the same group share more in common than those in other groups. And lastly, they train a GAN — a two-part neural network consisting of generators that produce samples and discriminators that attempt to distinguish between the generated samples and real-world samples — by inferring labels.
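To make the clustering step concrete, here is a minimal sketch, not the paper's actual pipeline: it assumes feature vectors have already been produced by a self-supervised extractor and uses an off-the-shelf k-means implementation, with the resulting cluster IDs standing in for class labels.

```python
import numpy as np
from sklearn.cluster import KMeans

def infer_labels(features, n_clusters=50):
    """Cluster feature vectors and use the cluster IDs as pseudo-labels.

    `features` is an (N, D) array that would come from the learned
    semantic extractor (the extractor itself is not shown here).
    """
    kmeans = KMeans(n_clusters=n_clusters, random_state=0).fit(features)
    return kmeans.labels_  # (N,) integer pseudo-labels in [0, n_clusters)

# Toy usage with random stand-in features; the pseudo-labels would then be
# provided to the discriminator in place of hand-annotated ground-truth labels.
fake_features = np.random.rand(1000, 128)
pseudo_labels = infer_labels(fake_features, n_clusters=10)
print(pseudo_labels[:20])
```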
Google today introduced TensorFlow Lite 1.0, its framework for developers deploying AI models on mobile and IoT devices. Improvements include selective registration and quantization during and after training for faster, smaller models. Quantization has led to 4 times compression of some models.
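For context, post-training quantization is driven through the TensorFlow Lite converter; a minimal sketch (the paths are placeholders, and the exact flags vary between TensorFlow releases) looks roughly like this:

```python
import tensorflow as tf

# Convert a SavedModel into a TensorFlow Lite model with post-training
# quantization enabled ("saved_model_dir" is a placeholder path).
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize weights for a smaller model
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```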
“We are going to fully support it. We’re not going to break things and make sure we guarantee its compatibility. I think a lot of people who deploy this on phones want those guarantees,” TensorFlow engineering director Rajat Monga told VentureBeat in a phone interview.
The TensorFlow Lite team at Google also shared its roadmap for the future today, which aims to shrink and speed up AI models for edge deployment. Planned work includes model acceleration (especially for Android developers using neural nets), a Keras-based pruning toolkit, and additional quantization enhancements.
Other changes on the way:
Support for control flow, which is essential to the operation of models like recurrent neural networks
CPU performance optimization with Lite models, potentially involving partnerships with other companies
Expand coverage of GPU delegate operations and finalize the API to make it generally available
A TensorFlow 2.0 model converter for producing Lite models, with better reporting so developers can understand what went wrong in the conversion process and how to fix it
TensorFlow Lite is deployed on more than two billion devices today, TensorFlow Lite engineer Raziel Alvarez said onstage at the TensorFlow Dev Summit being held at Google offices in Sunnyvale, California.
TensorFlow Lite increasingly makes TensorFlow Mobile obsolete, except for users who want to use it for training; a solution for that is in the works, Alvarez said.
Wind power has become increasingly popular, but its success is limited by the fact that wind comes and goes as it pleases, making it hard for power grids to count on the renewable energy and less likely to fully embrace it. While we can’t control the wind, Google has an idea for the next best thing: using machine learning to predict it.
Google and DeepMind have started testing machine learning on Google’s own wind turbines, which are part of the company’s renewable energy projects. Beginning last year, they fed weather forecasts and existing turbine data into DeepMind’s machine learning platform, which churned out wind power predictions 36 hours ahead of actual power generation. Google could then make supply commitments to power grids a full day before delivery. That predictability makes it easier and more appealing for energy grids to depend on wind power, and as a result, it boosted the value of Google’s wind energy by roughly 20 percent.
IBM Watson Anywhere is built on top of Kubernetes, the open source orchestration engine that can be deployed in diverse environments. Since the Watson Anywhere platform is built as a set of microservices designed to run on Kubernetes, it is flexible and portable.
[…]
According to IBM, the microservices-based Watson Anywhere delivers two solutions –
Watson OpenScale: IBM’s open AI platform for managing multiple instances of AI, no matter where they were developed – including the ability to explain how AI decisions are being made in real time, for greater transparency and compliance.
Watson Assistant: IBM’s AI tool for building conversational interfaces into applications and devices. More advanced than a traditional chatbot, Watson Assistant intelligently determines when to search for a result, when to ask the user for clarification, and when to offload the user to a human for personal assistance. Also, the Watson Assistant Discovery Extension enables organizations to unlock hidden insights in unstructured data and documents.
IBM Cloud Private for Data is an extension of the hybrid cloud focused on data and analytics. According to IBM, it simplifies and unifies how customers collect, organize and analyze data to accelerate the value of data science and AI. The multi-cloud platform delivers a broad range of core data microservices, with the option to add more from a growing services catalog.
IBM Watson Anywhere is seamlessly integrated with Cloud Private for Data. The combination enables customers to manage end-to-end data workflows to help ensure that data is easily accessible for AI.
At a glance, the images featured on the website This Person Does Not Exist might seem like random high school portraits or vaguely inadvisable LinkedIn headshots. But every single photo on the site has been created by using a special kind of artificial intelligence algorithm called generative adversarial networks (GANs).
Every time the site is refreshed, a shockingly realistic — but totally fake — picture of a person’s face appears. Uber software engineer Phillip Wang created the page to demonstrate what GANs are capable of, and then posted it to the public Facebook group “Artificial Intelligence & Deep Learning” on Tuesday.
The underlying code that made this possible, titled StyleGAN, was written by Nvidia and featured in a paper that has yet to be peer-reviewed. This exact type of neural network has the potential to revolutionize video game and 3D-modeling technology, but, as with almost any kind of technology, it could also be used for more sinister purposes. Deepfakes, or computer-generated images superimposed on existing pictures or videos, can be used to push fake news narratives or other hoaxes. That’s precisely why Wang chose to create the mesmerizing but also chilling website.
A Beijing-based online education start-up has developed an artificial intelligence-powered maths app that can check children’s arithmetic problems through the simple snap of a photo. Based on the image and its internal database, the app automatically checks whether the answers are right or wrong.
Known as Xiaoyuan Kousuan, the free app launched by the Tencent Holdings-backed online education firm Yuanfudao has gained increasing popularity in China since its launch a year ago and claims to have checked an average of 70 million arithmetic problems per day, saving users around 40,000 hours of time in total.
Yuanfudao is also trying to build the country’s biggest education-related database generated from the everyday experiences of real students. Using this, the six-year-old company – which has a long line of big-name investors including Warburg Pincus, IDG Capital and Matrix Partners China – aims to reinvent how children are taught in China.
“By checking nearly 100 million problems every day, we have developed a deep understanding of the kind of mistakes students make when facing certain problems,” said Li Xin, co-founder of Yuanfudao – which means “ape tutor” in Chinese – in a recent interview. “The data gathered through the app can serve as a pillar for us to provide better online education courses.”
Video game publisher Ubisoft is working with Mozilla to develop an artificial intelligence coding assistant called Clever-Commit, head of Ubisoft La Forge Yves Jacquier announced during DICE Summit 2019 on Tuesday.
Clever-Commit reportedly helps programmers evaluate whether or not a code change will introduce a new bug by learning from past bugs and fixes. The prototype, called Commit-Assistant, was tested using data collected during game development, Ubisoft said, and it’s already contributing to some major AAA titles. The publisher is also working on integrating it into other brands.
“Working with Mozilla on Clever-Commit allows us to support other programming languages and increase the overall performances of the technology. Using this tech in our games and Firefox will allow developers to be more productive as they can spend more time creating the next feature rather than fixing bugs. Ultimately, this will allow us to create even better experiences for our gamers and increase the frequency of our game updates,” said Mathieu Nayrolles, technical architect, data scientist, and member of the Technological Group at Ubisoft Montreal.
Mozilla is assisting Ubisoft by providing programming language expertise in Rust, C++, and JavaScript. The technology will also help the company ship more stable versions of its Firefox internet browser.
Imagine using machine learning to ensure that the pieces of an aircraft fit together more precisely, and can be assembled with less testing and time. That is one of the uses behind new technology being developed by researchers at Purdue University and the University of Southern California.
“We’re really taking a giant leap and working on the future of manufacturing,” said Arman Sabbaghi, an assistant professor of statistics in Purdue’s College of Science, who led the research team at Purdue with support from the National Science Foundation. “We have developed automated machine learning technology to help improve additive manufacturing. This kind of innovation is heading on the path to essentially allowing anyone to be a manufacturer.”
The technology addresses a current significant challenge within additive manufacturing: individual parts that are produced need to have a high degree of precision and reproducibility. The technology allows a user to run the software component locally within their current network, exposing an API, or programming interface. The software uses machine learning to analyze the product data and create plans to manufacture the needed pieces with greater accuracy.
“This has applications for many industries, such as aerospace, where exact geometric dimensions are crucial to ensure reliability and safety,” Sabbaghi said. “This has been the first time where I’ve been able to see my statistical work really make a difference and it’s the most incredible feeling in the world.”
The researchers have developed a new model-building algorithm and computer application for geometric accuracy control in additive manufacturing systems. Additive manufacturing, commonly known as 3-D printing, is a growing industry that involves building components in a way that is similar to an inkjet printer where parts are ‘grown’ from the building surface.
Additive manufacturing has progressed from a prototype development tool to one that can now offer numerous competitive advantages. Those advantages include shape complexity, waste reduction and potentially less expensive manufacturing, compared to traditional subtractive manufacturing where the process involves starting with the raw material and chipping away at it to produce a final result.
Wohlers Associates estimates that additive manufacturing is a $7.3 billion industry.
“We use machine learning technology to quickly correct computer-aided design models and produce parts with improved geometric accuracy,” Sabbaghi said. The improved accuracy ensures that the produced parts are within the needed tolerances and that every part produced is consistent and will perform the same way, whether it was created on a different machine or 12 months later.
AdScan offers every advertiser a quick, free pre-test to map out which elements score better or worse and thereby affect the reception and impact of that specific commercial.
AdScan is a machine learning tool that, based on the content of a commercial, can predict how a panel of one hundred people would rate it. To do so, AdScan combines historical panel data, computer pattern analysis and smart algorithms to arrive at an analysis.
The ad-rating tool delivers an advisory report within 20 minutes that can contribute to the success of a campaign. AdScan then determines whether a commercial scores below, at or above the benchmark, and which elements you can adjust to achieve a higher score.
McCormick — the maker of Old Bay and other seasonings, spices and condiments — hopes the technology can help it tantalize taste buds. It worked with IBM Research to build an AI system trained on decades worth of data about spices and flavors to come up with new flavor combinations.
The Baltimore, Maryland-based company plans to bring its first batch of AI-assisted products to market later this year. The line of seasoning mixes, called One, for making one-dish meals, includes flavors such as Tuscan Chicken and Bourbon Pork Tenderloin.
Hamed Faridi, McCormick’s chief science officer, told CNN Business that using AI cuts down product development time, and that the company plans to use the technology to help develop all new products by the end of 2021.
Columbia Engineering researchers have made a major advance in robotics by creating a robot that learns what it is, from scratch, with zero prior knowledge of physics, geometry, or motor dynamics. Initially the robot does not know if it is a spider, a snake, an arm–it has no clue what its shape is. After a brief period of “babbling,” and within about a day of intensive computing, their robot creates a self-simulation. The robot can then use that self-simulator internally to contemplate and adapt to different situations, handling new tasks as well as detecting and repairing damage in its own body. The work is published today in Science Robotics.
To date, robots have operated by having a human explicitly model the robot. “But if we want robots to become independent, to adapt quickly to scenarios unforeseen by their creators, then it’s essential that they learn to simulate themselves,” says Hod Lipson, professor of mechanical engineering, and director of the Creative Machines lab, where the research was done.
A robot built by a team of researchers at MIT in America has two prongs for fingers, sensors in its wrist, and a camera for eyes.
As the AI-powered bot surveys the tower, one of its prongs is told by software to poke a block, which sends feedback to its sensor to work out how movable that particular block is. If it’s too stiff, the robot will try another block, and keep pushing in millimetre increments until it has protruded far enough to be removed and placed on top of the tower.
Prodding until you find a suitable block to push may seem like cheating, but, well, given the state of 2019 so far, we’ll take a rule-stretching robot any day. Here it is in action…
“Unlike in more purely cognitive tasks or games such as chess or Go, playing the game of Jenga also requires mastery of physical skills such as probing, pushing, pulling, placing, and aligning pieces,” said Alberto Rodriguez, an assistant professor of mechanical engineering at MIT, this week.
“It requires interactive perception and manipulation, where you have to go and touch the tower to learn how and when to move blocks. This is very difficult to simulate, so the robot has to learn in the real world, by interacting with the real Jenga tower. The key challenge is to learn from a relatively small number of experiments by exploiting common sense about objects and physics.”
The Remako HD Graphics Mod is a mod that completely revamps the pre-rendered backgrounds of the classic JRPG Final Fantasy VII. All of the backgrounds now have 4 times the resolution of the original.
Using state-of-the-art AI neural networks, this upscaling tries to emulate the detail the original renders would have had. This helps the new visuals come as close to a higher-resolution re-rendering of the original as possible with current technology.
What does it look like?
Below are two trailers. One is a comparison of the raw images, while the other shows off the mod in action.
To advance the state-of-the-art in speech neuroprosthesis, we combined the recent advances in deep learning with the latest innovations in speech synthesis technologies to reconstruct closed-set intelligible speech from the human auditory cortex. We investigated the dependence of reconstruction accuracy on linear and nonlinear (deep neural network) regression methods and the acoustic representation that is used as the target of reconstruction, including auditory spectrogram and speech synthesis parameters. In addition, we compared the reconstruction accuracy from low and high neural frequency ranges. Our results show that a deep neural network model that directly estimates the parameters of a speech synthesizer from all neural frequencies achieves the highest subjective and objective scores on a digit recognition task, improving the intelligibility by 65% over the baseline method, which used linear regression to reconstruct the auditory spectrogram.
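As a loose illustration of the regression comparison described above (a sketch only, with random stand-in arrays rather than actual cortical recordings or spectrograms), one could contrast a linear baseline with a small neural network regressor:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Stand-in data: 5,000 time frames of 200-channel neural features mapped to
# 128-bin auditory spectrogram frames (random numbers, for illustration only).
X = np.random.randn(5000, 200)
Y = np.random.randn(5000, 128)
X_train, Y_train, X_test, Y_test = X[:4000], Y[:4000], X[4000:], Y[4000:]

linear = LinearRegression().fit(X_train, Y_train)
dnn = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=50).fit(X_train, Y_train)

print("linear R^2:", linear.score(X_test, Y_test))
print("DNN R^2:   ", dnn.score(X_test, Y_test))
```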
Now, we introduce our StarCraft II program AlphaStar, the first Artificial Intelligence to defeat a top professional player. In a series of test matches held on 19 December, AlphaStar decisively beat Team Liquid’s Grzegorz “MaNa” Komincz, one of the world’s strongest professional StarCraft players, 5-0, following a successful benchmark match against his team-mate Dario “TLO” Wünsch. The matches took place under professional match conditions on a competitive ladder map and without any game restrictions.
Although there have been significant successes in video games such as Atari, Mario, Quake III Arena Capture the Flag, and Dota 2, until now, AI techniques have struggled to cope with the complexity of StarCraft. The best results were made possible by hand-crafting major elements of the system, imposing significant restrictions on the game rules, giving systems superhuman capabilities, or by playing on simplified maps. Even with these modifications, no system has come anywhere close to rivalling the skill of professional players. In contrast, AlphaStar plays the full game of StarCraft II, using a deep neural network that is trained directly from raw game data by supervised learning and reinforcement learning.
The Victoria Police are the primary law enforcement agency of Victoria, Australia. With over 16,000 vehicles stolen in Victoria this past year — at a cost of about $170 million — the police department is experimenting with a variety of technology-driven solutions to crack down on car theft. They call this system BlueNet.
To help prevent fraudulent sales of stolen vehicles, there is already a VicRoads web-based service for checking the status of vehicle registrations. The department has also invested in a stationary license plate scanner — a fixed tripod camera which scans passing traffic to automatically identify stolen vehicles.
Don’t ask me why, but one afternoon I had the desire to prototype a vehicle-mounted license plate scanner that would automatically notify you if a vehicle had been stolen or was unregistered. Understanding that these individual components existed, I wondered how difficult it would be to wire them together.
But it was after a bit of googling that I discovered the Victoria Police had recently undergone a trial of a similar device, and the estimated cost of roll out was somewhere in the vicinity of $86,000,000. One astute commenter pointed out that the $86M cost to fit out 220 vehicles comes in at a rather thirsty $390,909 per vehicle.
Surely we can do a bit better than that.
Existing stationary license plate recognition systems
The Success Criteria
Before getting started, I outlined a few key requirements for product design.
Requirement #1: The image processing must be performed locally
Streaming live video to a central processing warehouse seemed the least efficient approach to solving this problem. Besides the whopping bill for data traffic, you’re also introducing network latency into a process which may already be quite slow.
Although a centralized machine learning algorithm is only going to get more accurate over time, I wanted to learn if a local, on-device implementation would be “good enough”.
Requirement #2: It must work with low quality images
Since I don’t have a Raspberry Pi camera or USB webcam, I’ll be using dashcam footage — it’s readily available and an ideal source of sample data. As an added bonus, dashcam video represents the overall quality of footage you’d expect from vehicle-mounted cameras.
Requirement #3: It needs to be built using open source technology
Relying upon proprietary software means you’ll get stung every time you request a change or enhancement — and the stinging will continue for every request made thereafter. Using open source technology is a no-brainer.
My solution
At a high level, my solution takes an image from a dashcam video, pumps it through an open source license plate recognition system installed locally on the device, queries the registration check service, and then returns the results for display.
The data returned to the device installed in the law enforcement vehicle includes the vehicle’s make and model (which it only uses to verify whether the plates have been stolen), the registration status, and any notifications of the vehicle being reported stolen.
If that sounds rather simple, it’s because it really is. For example, the image processing can all be handled by the openalpr library.
This is really all that’s involved to recognize the characters on a license plate:
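A minimal sketch using the openalpr Python bindings (the config and runtime-data paths below are placeholders, and the original prototype may have wired things up differently):

```python
from openalpr import Alpr

# "au" selects Australian plate patterns; paths are placeholders.
alpr = Alpr("au", "/etc/openalpr/openalpr.conf", "/usr/share/openalpr/runtime_data")
if not alpr.is_loaded():
    raise RuntimeError("Error loading OpenALPR")

results = alpr.recognize_file("dashcam_frame.jpg")
for plate in results["results"]:
    # Each read comes with a confidence score out of 100.
    print(plate["plate"], plate["confidence"])

alpr.unload()
```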
A Minor Caveat
Public access to the VicRoads APIs is not available, so license plate checks occur via web scraping for this prototype. While generally frowned upon, this is a proof of concept and I’m not slamming anyone’s servers.
Here’s what the dirtiness of my proof-of-concept scraping looks like:
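Here is a hedged sketch of that scraping step; the endpoint, form field and CSS class below are placeholders, not the actual VicRoads service:

```python
import requests
from bs4 import BeautifulSoup

# Placeholder endpoint and form field; the real VicRoads service differs.
REGO_CHECK_URL = "https://example.gov.au/vehicle-registration-enquiry"

def check_registration(plate):
    """Submit a plate number to a registration-check page and scrape the result."""
    response = requests.post(REGO_CHECK_URL, data={"registrationNumber": plate})
    soup = BeautifulSoup(response.text, "html.parser")
    status = soup.find("div", {"class": "registration-status"})  # selector is a guess
    return status.get_text(strip=True) if status else "unknown"

print(check_registration("ABC123"))
```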
Results
I must say I was pleasantly surprised.
I expected the open source license plate recognition to be pretty rubbish. Additionally, the image recognition algorithms are probably not optimised for Australian license plates.
The solution was able to recognise license plates in a wide field of view.
Annotations added for effect. Number plate identified despite reflections and lens distortion.
The solution would occasionally have issues with particular letters, though.
Incorrect reading of plate, mistook the M for an H
But … the solution would eventually get them correct.
A few frames later, the M is correctly identified and at a higher confidence rating
As you can see in the above two images, processing the image a couple of frames later jumped from a confidence rating of 87% to a hair over 91%.
I’m confident, pardon the pun, that the accuracy could be improved by increasing the sample rate, and then sorting by the highest confidence rating. Alternatively a threshold could be set that only accepts a confidence of greater than 90% before going on to validate the registration number.
Those are very straightforward code-first fixes, and don’t preclude the training of the license plate recognition software with a local data set.
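A sketch of those two fixes, reusing the result structure returned by openalpr above:

```python
CONFIDENCE_THRESHOLD = 90.0

def best_plate_reads(frames_results, threshold=CONFIDENCE_THRESHOLD):
    """Collect plate reads across several sampled frames, keep only the
    high-confidence ones, and return them sorted best-first."""
    reads = []
    for results in frames_results:  # one openalpr result dict per sampled frame
        for plate in results.get("results", []):
            if plate["confidence"] >= threshold:
                reads.append((plate["confidence"], plate["plate"]))
    return [plate for _, plate in sorted(reads, reverse=True)]
```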
The $86,000,000 Question
To be fair, I have absolutely no clue what the $86M figure includes — nor can I speak to the accuracy of my open source tool with no localized training vs. the pilot BlueNet system.
I would expect part of that budget includes the replacement of several legacy databases and software applications to support the high frequency, low latency querying of license plates several times per second, per vehicle.
On the other hand, the cost of ~$391k per vehicle seems pretty rich — especially if BlueNet isn’t particularly accurate and there are no large-scale IT projects to decommission or upgrade dependent systems.
Future Applications
While it’s easy to get caught up in the Orwellian nature of an “always on” network of license plate snitchers, there are many positive applications of this technology. Imagine a passive system scanning fellow motorists for an abductor’s car that automatically alerts authorities and family members to their current location and direction.
Tesla vehicles are already brimming with cameras and sensors with the ability to receive OTA updates — imagine turning these into a fleet of virtual good Samaritans. Uber and Lyft drivers could also be outfitted with these devices to dramatically increase the coverage area.
Using open source technology and existing components, it seems possible to offer a solution that provides a much higher rate of return — for an investment much less than $86M.
TAUS, the language data network, is an independent and neutral industry organization. We develop communities through a program of events and online user groups and by sharing knowledge, metrics and data that help all stakeholders in the translation industry develop a better service. We provide data services to buyers and providers of language and translation services.
The shared knowledge and data help TAUS members decide on effective localization strategies. The metrics support more efficient processes and the normalization of quality evaluation. The data lead to improved translation automation.
TAUS develops APIs that give members access to services like DQF, the DQF Dashboard and the TAUS Data Market through their own translation platforms and tools. TAUS metrics and data are already built into most of the major translation technologies.
Robots normally need to be programmed in order to get them to perform a particular task, but they can be coaxed into writing the instructions themselves with the help of machine learning, according to research published in Science.
Engineers at Vicarious AI, a robotics startup based in California, USA, have built what they call a “visual cognitive computer” (VCC), a software platform connected to a camera system and a robot gripper. Given a set of visual clues, the VCC writes a short program of instructions to be followed by the robot so it knows how to move its gripper to do simple tasks.
“Humans are good at inferring the concepts conveyed in a pair of images and then applying them in a completely different setting,” the paper states.
“The human-inferred concepts are at a sufficiently high level to be effortlessly applied in situations that look very different, a capacity so natural that it is used by IKEA and LEGO to make language-independent assembly instructions.”
Don’t get your hopes up, however: these robots can’t put your flat-pack table or chair together for you quite yet. But they can do very basic jobs, like moving a block backwards and forwards.
It works like this. First, an input and output image are given to the system. The input image is a jumble of colored objects of various shapes and sizes, and the output image is an ordered arrangement of the objects. For example, the input image could be a number of red blocks and the output image is all the red blocks ordered to form a circle. Think of it a bit like a before and after image.
The VCC works out what commands need to be performed by the robot in order to organise the range of objects before it, based on the difference between the ‘before’ and ‘after’ images. The system is trained to learn what action corresponds to what command using supervised learning.
Dileep George, cofounder of Vicarious, explained to The Register, “up to ten pairs [of images are used] for training, and ten pairs for testing. Most concepts are learned with only about five examples.”
Here’s a diagram of how it works:
A: A graph describing the robot’s components. B: The list of commands the VCC can use. Image credit: Vicarious AI
The left hand side is a schematic of all the different parts that control the robot. The visual hierarchy looks at the objects in front of the camera and categorizes them by object shape and colour. The attention controller decides what objects to focus on, whilst the fixation controller directs the robot’s gaze to the objects before the hand controller operates the robot’s arms to move the objects about.
The robot doesn’t need too many training examples to work because there are only 24 commands, listed on the right hand of the diagram, for the VCC controller.
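To make the “program of instructions” idea concrete, here is a toy sketch; the command names are invented for illustration and are not Vicarious’s actual 24-command instruction set:

```python
def induced_program(scene):
    """A hand-written stand-in for the short program the VCC would infer from a
    before/after image pair: move every red object to a target region."""
    program = []
    for obj in scene:
        if obj["color"] == "red":
            program += [
                ("fixate_object", obj["id"]),   # direct the gaze controller to the object
                ("grasp_object", obj["id"]),    # close the gripper on it
                ("move_hand_to", "target"),     # carry it to the target region
                ("release_object", obj["id"]),  # open the gripper
            ]
    return program

scene = [{"id": 1, "color": "red"}, {"id": 2, "color": "blue"}, {"id": 3, "color": "red"}]
for step in induced_program(scene):
    print(step)
```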
AI systems excel in pattern recognition, so much so that they can stalk individual zebrafish and fruit flies even when the animals are in groups of up to a hundred.
To demonstrate this, a group of researchers from the Champalimaud Foundation, a private biomedical research lab in Portugal, trained two convolutional neural networks to identify and track individual animals within a group. The aim is not so much to match or exceed humans’ ability to spot and follow stuff, but rather to automate the process of studying the behavior of animals in their communities.
“The ultimate goal of our team is understanding group behavior,” said Gonzalo de Polavieja. “We want to understand how animals in a group decide together and learn together.”
The resulting machine-learning software, known as idtracker.ai, is described as “a species-agnostic system.” It’s “able to track all individuals in both small and large collectives (up to 100 individuals) with high identification accuracy—often greater than 99.9 per cent,” according to a paper published in Nature Methods on Monday.
The idtracker.ai software is split into a crossing-detector network and an identification network. First, it was fed video footage of the animals interacting in their enclosures. In the zebrafish experiment, for example, the system pre-processes the fish as coloured blobs, and the crossing-detector network learns to tell whether a blob is a single animal or several animals touching or crossing one another. The identification network is then used to identify the individual animals during each crossing event.
Surprisingly, it reached an accuracy rate of up to 99.96 per cent for groups of 60 zebrafish and increased to 99.99 per cent for 100 zebrafish. Recognizing fruit flies is harder. Idtracker.ai was accurate to 99.99 per cent for 38 fruit flies, but decreased slightly to 99.95 per cent for 72 fruit flies.
As good as they are at causing mischief, researchers from the MIT-IBM Watson AI Lab realized GANs are also a powerful tool: because they paint what they’re “thinking,” they could give humans insight into how neural networks learn and reason. This has been something the broader research community has sought for a long time—and it’s become more important with our increasing reliance on algorithms.
“There’s a chance for us to learn what a network knows from trying to re-create the visual world,” says David Bau, an MIT PhD student who worked on the project.
So the researchers began probing a GAN’s learning mechanics by feeding it various photos of scenery—trees, grass, buildings, and sky. They wanted to see whether it would learn to organize the pixels into sensible groups without being explicitly told how.
Stunningly, over time, it did. By turning “on” and “off” various “neurons” and asking the GAN to paint what it thought, the researchers found distinct neuron clusters that had learned to represent a tree, for example. Other clusters represented grass, while still others represented walls or doors. In other words, it had managed to group tree pixels with tree pixels and door pixels with door pixels regardless of how these objects changed color from photo to photo in the training set.
The GAN knows not to paint any doors in the sky. Image credit: MIT Computer Science & Artificial Intelligence Laboratory
“These GANs are learning concepts very closely reminiscent of concepts that humans have given words to,” says Bau.
Not only that, but the GAN seemed to know what kind of door to paint depending on the type of wall pictured in an image. It would paint a Georgian-style door on a brick building with Georgian architecture, or a stone door on a Gothic building. It also refused to paint any doors on a piece of sky. Without being told, the GAN had somehow grasped certain unspoken truths about the world.
This was a big revelation for the research team. “There are certain aspects of common sense that are emerging,” says Bau. “It’s been unclear before now whether there was any way of learning this kind of thing [through deep learning].” That it is possible suggests that deep learning can get us closer to how our brains work than we previously thought—though that’s still nowhere near any form of human-level intelligence.
Other research groups have begun to find similar learning behaviors in networks handling other types of data, according to Bau. In language research, for example, people have found neuron clusters for plural words and gender pronouns.
Being able to identify which clusters correspond to which concepts makes it possible to control the neural network’s output. Bau’s group can turn on just the tree neurons, for example, to make the GAN paint trees, or turn on just the door neurons to make it paint doors. Language networks, similarly, can be manipulated to change their output—say, to swap the gender of the pronouns while translating from one language to another. “We’re starting to enable the ability for a person to do interventions to cause different outputs,” Bau says.
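A rough sketch of what turning neuron clusters on or off can look like in practice, using a forward hook to zero out chosen channels in one layer of a generator; the layer name and unit indices are placeholders, and this is not the team’s actual GANpaint code:

```python
import torch

def ablate_units(layer, unit_indices):
    """Register a hook that zeroes the given channels of a layer's output,
    effectively switching those 'neurons' off on every forward pass."""
    def hook(module, inputs, output):
        output[:, unit_indices] = 0.0
        return output
    return layer.register_forward_hook(hook)

# Hypothetical usage with some generator G (not defined here):
#   handle = ablate_units(G.layer4, unit_indices=[12, 57, 301])  # e.g. "tree" units
#   image = G(z)       # the generated scene now omits the ablated concept
#   handle.remove()    # restore normal behaviour
```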
The team has now released an app called GANpaint that turns this newfound ability into an artistic tool. It allows you to turn on specific neuron clusters to paint scenes of buildings in grassy fields with lots of doors. Beyond its silliness as a playful outlet, it also speaks to the greater potential of this research.
“The problem with AI is that in asking it to do a task for you, you’re giving it an enormous amount of trust,” says Bau. “You give it your input, it does its ‘genius’ thinking, and it gives you some output. Even if you had a human expert who is super smart, that’s not how you’d want to work with them either.”
With GANpaint, you begin to peel back the lid on the black box and establish some kind of relationship. “You can figure out what happens if you do this, or what happens if you do that,” says Hendrik Strobelt, the creator of the app. “As soon as you can play with this stuff, you gain more trust in its capabilities and also its boundaries.”
A video software firm has come up with a way to prevent people from sharing their account details for Netflix and other streaming services with friends and family members.
UK-based Synamedia unveiled the artificial intelligence software at the CES 2019 technology trade show in Las Vegas, claiming it could save the streaming industry billions of dollars over the next few years.
Casual password sharing is practised by more than a quarter of millennials, according to figures from market research company Magid.
Separate figures from research firm Parks Associates predict that $9.9 billion (£7.7bn) of pay-TV revenues and $1.2 billion of revenue from subscription-based streaming services will be lost to credential sharing each year.
The AI system developed by Synamedia uses machine learning to analyse account activity and recognise unusual patterns, such as account details being used in two locations within similar time periods.
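As a toy illustration of that kind of rule (not Synamedia’s actual model), concurrent sessions from two locations too far apart to travel between can be flagged like this:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def looks_shared(sessions, max_hours=6, min_km=500):
    """Flag an account if it streams from two places too far apart to plausibly
    reach within the time between the sessions."""
    for i, a in enumerate(sessions):
        for b in sessions[i + 1:]:
            hours_apart = abs(a["time"] - b["time"]) / 3600
            far_apart = haversine_km(a["lat"], a["lon"], b["lat"], b["lon"]) >= min_km
            if hours_apart <= max_hours and far_apart:
                return True
    return False
```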
The idea is to spot instances of customers sharing their account credentials illegally and to offer them a premium shared account service that will authorise a limited level of password sharing.
“Casual credentials sharing is becoming too expensive to ignore. Our new solution gives operators the ability to take action,” said Jean Marc Racine, Synamedia’s chief product officer.
“Many casual users will be happy to pay an additional fee for a premium, shared service with a greater number of concurrent users. It’s a great way to keep honest people honest while benefiting from an incremental revenue stream.”
Artificial intelligence can potentially identify someone’s genetic disorders by inspecting a picture of their face, according to a paper published in Nature Medicine this week.
The tech relies on the fact some genetic conditions impact not just a person’s health, mental function, and behaviour, but sometimes are accompanied with distinct facial characteristics. For example, people with Down Syndrome are more likely to have angled eyes, a flatter nose and head, or abnormally shaped teeth. Other disorders like Noonan Syndrome are distinguished by having a wide forehead, a large gap between the eyes, or a small jaw. You get the idea.
An international group of researchers, led by US-based FDNA, turned to machine-learning software to study genetic mutations, and believe that machines can help doctors diagnose patients with genetic disorders using their headshots.
The team used 17,106 faces to train a convolutional neural network (CNN), commonly used in computer vision tasks, to screen for 216 genetic syndromes. The images were obtained from two sources: publicly available medical reference libraries, and snaps submitted by users of a smartphone app called Face2Gene, developed by FDNA.
Given an image, the system, dubbed DeepGestalt, studies a person’s face to make a note of the size and shape of their eyes, nose, and mouth. Next, the face is split into regions, and each piece is fed into the CNN. The pixels in each region of the face are represented as vectors and mapped to a set of features that are commonly associated with the genetic disorders learned by the neural network during its training process.
DeepGestalt then assigns a score per syndrome for each region, and collects these results to compile a list of its top 10 genetic disorder guesses from that submitted face.
An example of how DeepGestalt works. First, the input image is analysed using landmarks and sectioned into different regions before the system spits out its top 10 predictions. Image credit: Nature and Gurovich et al.
The first answer is the genetic disorder DeepGestalt believes the patient is most likely affected by, all the way down to its tenth answer, which is the tenth most likely disorder.
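A schematic sketch of that aggregation step; the shapes and the simple averaging rule are illustrative guesses rather than the published implementation:

```python
import numpy as np

def top_syndromes(region_scores, syndrome_names, k=10):
    """Average per-region scores into one score per syndrome and return the
    k highest-scoring syndromes, most likely first.

    region_scores has shape (n_regions, n_syndromes)."""
    overall = region_scores.mean(axis=0)
    ranked = np.argsort(overall)[::-1][:k]
    return [(syndrome_names[i], float(overall[i])) for i in ranked]

# Toy example: 20 facial regions scored against 216 syndromes.
scores = np.random.rand(20, 216)
names = ["syndrome_%d" % i for i in range(216)]
print(top_syndromes(scores, names))
```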
When it was tested on two independent datasets, the system accurately guessed the correct genetic disorder among its top 10 suggestions around 90 per cent of the time. At first glance, the results seem promising. The paper also mentions DeepGestalt “outperformed clinicians in three initial experiments, two with the goal of distinguishing subjects with a target syndrome from other syndromes, and one of separating different genetic subtypes in Noonan Syndrome.”
There’s always a but
A closer look, though, reveals that the lofty claims involve training and testing the system on limited datasets – in other words, if you stray outside the software’s comfort zone, and show it unfamiliar faces, it probably won’t perform that well. The authors admit previous similar studies “have used small-scale data for training, typically up to 200 images, which are small for deep-learning models.” Although they use a total of more than 17,000 training images, when spread across 216 genetic syndromes, the training dataset for each one ends up being pretty small.
For example, the model that examined Noonan Syndrome was only trained on 278 images. The datasets DeepGestalt was tested against were similarly small. One only contained 502 patient images, and the other 392.
Learning in environments with large state and action spaces, and sparse rewards, can hinder a Reinforcement Learning (RL) agent’s learning through trial-and-error. For instance, following natural language instructions on the Web (such as booking a flight ticket) leads to RL settings where input vocabulary and number of actionable elements on a page can grow very large. Even though recent approaches improve the success rate on relatively simple environments with the help of human demonstrations to guide the exploration, they still fail in environments where the set of possible instructions can reach millions. We approach the aforementioned problems from a different perspective and propose guided RL approaches that can generate an unbounded amount of experience for an agent to learn from. Instead of learning from a complicated instruction with a large vocabulary, we decompose it into multiple sub-instructions and schedule a curriculum in which an agent is tasked with a gradually increasing subset of these relatively easier sub-instructions. In addition, when expert demonstrations are not available, we propose a novel meta-learning framework that generates new instruction-following tasks and trains the agent more effectively. We train a DQN, a deep reinforcement learning agent, with the Q-value function approximated by a novel QWeb neural network architecture on these smaller, synthetic instructions. We evaluate the ability of our agent to generalize to new instructions on the World of Bits benchmark, on forms with up to 100 elements, supporting 14 million possible instructions. The QWeb agent outperforms the baseline without using any human demonstrations, achieving a 100% success rate on several difficult environments.
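As a rough illustration of the curriculum idea (the instruction fields below are invented placeholders, not the authors’ environments), a composite web instruction can be decomposed into sub-instructions and scheduled in gradually growing subsets:

```python
import random

# A composite flight-booking instruction decomposed into simpler sub-instructions
# (field names are hypothetical placeholders).
instruction = {
    "origin": "SFO",
    "destination": "JFK",
    "date": "2019-03-08",
    "passengers": "2",
}

def curriculum(sub_instructions, n_stages):
    """Yield gradually larger subsets of sub-instructions, so an agent first
    learns to fill a single field and eventually the full form."""
    fields = list(sub_instructions.items())
    for stage in range(1, n_stages + 1):
        size = max(1, round(stage * len(fields) / n_stages))
        yield dict(random.sample(fields, size))

for task in curriculum(instruction, n_stages=4):
    print(task)
```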
A team of researchers in Japan have devised an artificial intelligence (AI) system that can identify different types of cancer cells using microscopy images. Their method can also be used to determine whether the cancer cells are sensitive to radiotherapy. The researchers reported their findings in the journal Cancer Research.

In cancer patients, there can be tremendous variation in the types of cancer cells in a single tumor. Identifying the specific cell types present in tumors can be very useful when choosing the most effective treatment. However, making accurate assessments of cell types is time consuming and often hampered by human error and the limits of human sight.

To overcome these challenges, scientists led by Professor Hideshi Ishii of Osaka University, Japan, have developed an AI system that can identify different types of cancer cells from microscopy images, achieving higher accuracy than human judgement. The system is based on a convolutional neural network, a form of AI modeled on the human visual system.

“We first trained our system on 8,000 images of cells obtained from a phase-contrast microscope,” said corresponding author Ishii. “We then tested [the AI system’s] accuracy on another 2,000 images and showed that it had learned the features that distinguish mouse cancer cells from human ones, and radioresistant cancer cells from radiosensitive ones.”

The researchers noted that the automation and high accuracy of their system could be very useful for determining exactly which cells are present in a tumor or circulating in the body. Knowing whether or not radioresistant cells are present is vital when deciding whether radiotherapy would be effective. Furthermore, the same procedure can be applied post-treatment to assess patient outcomes. In the future, the team hopes to train the system on more cancer cell types, with the eventual goal of establishing a universal system that can automatically identify and distinguish all variants of cancer cells.

The article can be found at: Toratani et al. (2018) A Convolutional Neural Network Uses Microscopic Images to Differentiate between Mouse and Human Cell Lines and Their Radioresistant Clones. Read more from Asian Scientist Magazine at: https://www.asianscientist.com/2018/12/in-the-lab/artificial-intelligence-microscopy-cancer-cell-radiotherapy/
The Descartes Labs tree canopy layer around the Baltimore Beltway. Treeless main roads radiate from the dense pavement of the city to leafy suburbs.
All this fuss is not without good reason. Trees are great! They make oxygen for breathing, suck up CO₂, provide shade, reduce noise pollution, and just look at them — they’re beautiful!
[…]
So Descartes Labs built a machine learning model to identify tree canopy using a combination of lidar, aerial imagery and satellite imagery. Here’s the area surrounding the Boston Common, for example. We clearly see that the Public Garden, Common and Commonwealth Avenue all have lots of trees. But we also see some other fun artifacts. The trees in front of the CVS in Downtown Crossing, for instance, might seem inconsequential to a passer-by, but they’re one of the biggest concentrations of trees in the neighborhood.
[…]
The classifier can be run over any location in the world where we have approximately 1-meter resolution imagery. When using NAIP imagery, for instance, the resolution of the tree canopy map is as high as 60cm. Drone imagery would obviously yield an even higher resolution.
Washington, D.C. tree canopy created with NAIP source imagery shown at different scales—all the way down to individual “TREES!” on The Ellipse.
The ability to map tree canopy at such a high resolution in areas that can’t be easily reached on foot would be helpful for utility companies to pinpoint encroachment issues—or for municipalities to find possible trouble spots beyond their official tree census (if they even have one). But by zooming out to a city level, patterns in the tree canopy show off urban greenspace quirks. For example, unexpected tree deserts can be identified and neighborhoods that would most benefit from a surge of saplings revealed.