Boffins harnessed the brain power of mice to build AI models that can’t be fooled

Researchers recorded the brain activity of mice staring at images and used the data to help make computer vision models more robust against adversarial attacks.

Convolutional neural networks (CNNs) used for object recognition in images are all susceptible to adversarial examples. These inputs have been tweaked in some way, whether by adding random noise or changing a few pixels here and there, so that a model incorrectly recognizes an object. Adversarial attacks cause these systems to mistake an image of a banana for a toaster, or a toy turtle for a rifle.

[…]

A group of researchers led by Baylor College of Medicine, Texas, has turned to mice for inspiration, according to a paper released on arXiv.

“We presented natural images to mice and measured the responses of thousands of neurons from cortical visual areas,” they wrote.

“Next, we denoised the notoriously variable neural activity using strong predictive models trained on this large corpus of responses from the mouse visual system, and calculated the representational similarity for millions of pairs of images from the model’s predictions.”

As you can tell, the paper is pretty jargony. In simple terms, the researchers recorded the brain activity of mice staring at thousands of images and used that data to build a computational system that models that activity. To make sure the mice were looking at the images, they were “head-fixed” and put on a treadmill.

[…]

When the CNN was tasked with classifying a different set of images that were not presented to the mice, its accuracy was comparable to a ResNet-18 model that had not been regularized. But as the researchers began adding random noise to those images, the performance of the unregularized models dropped more drastically compared to the regularized version.

“We observed that the CNN model becomes more robust to random noise when neural regularization is used,” the paper said. In other words, the mouse-hardened ResNet-18 model is less likely to be fooled by adversarial examples if it contains features borrowed from real biological mouse brains.
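To make the idea concrete, here is a minimal sketch of what a neural-regularized training loss could look like. This is my reconstruction from the paper's description, not the authors' released code; the feature tensors and brain-derived similarity matrix below are stand-ins.

```python
import torch
import torch.nn.functional as F

def representational_similarity(features: torch.Tensor) -> torch.Tensor:
    # Pairwise cosine similarities between the images' feature vectors.
    f = F.normalize(features.flatten(1), dim=1)
    return f @ f.t()

def loss_fn(logits, labels, features, mouse_similarity, alpha=0.1):
    task = F.cross_entropy(logits, labels)   # ordinary classification loss
    # Regularizer: pull the CNN's similarity structure toward the one
    # measured (via the predictive model) in mouse visual cortex.
    reg = F.mse_loss(representational_similarity(features), mouse_similarity)
    return task + alpha * reg

# Fake batch to show the call; in the real setup, `features` come from a
# hidden layer of the ResNet-18 and `mouse_similarity` from neural recordings.
feats = torch.rand(8, 512)
logits = torch.rand(8, 10)
labels = torch.randint(0, 10, (8,))
print(loss_fn(logits, labels, feats, torch.eye(8)))
```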

The researchers believe that incorporating these “brain-like representations” into machine learning models could help them reach “human-like performance” one day. But although the results seem promising, the researchers have no idea how it really works.

“While our results indeed show the benefit of adopting more brain-like representation in visual processing, it is however unclear which aspects of neural representation make it work. We think that it is the most important question and we need to understand the principle behind it,” they concluded.

Source: Boffins harnessed the brain power of mice to build AI models that can’t be fooled • The Register

Ancestry Taps AI To Sift Through Millions of Obituaries

Algorithms identified death notices in old newspaper pages, then another set of algorithms pulled names and other key details into a searchable database. From a report: Ancestry used artificial intelligence to extract obituary details hidden in a half-billion digitized newspaper pages dating back to 1690, data invaluable for customers building their family trees. The family history and consumer-genomics company, based in Lehi, Utah, began the project in late 2017 and introduced the new functionality last month. Through its subsidiary Newspapers.com, the company had a trove of newspaper pages, including obituaries — but it said that manually finding and importing those death notices to Ancestry.com in a form that was usable for customers would likely have taken years. Instead, Ancestry tasked its 24-person data-science team with having technology pinpoint and make sense of the data. The team trained machine-learning algorithms to recognize obituary content in those 525 million newspaper pages. It then trained another set of algorithms to detect and index key facts from the obituaries, such as names of the deceased’s spouse and children, birth dates, birthplaces and more.
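As a rough illustration of such a two-stage pipeline (entirely hypothetical; Ancestry has not published its code), one stage flags obituary-like text and a second extracts key facts. spaCy's small English model stands in here for the trained extraction algorithms:

```python
import re
import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def looks_like_obituary(page_text: str) -> bool:
    # Stand-in for the trained page classifier (stage 1).
    return bool(re.search(r"\b(obituary|passed away|survived by)\b", page_text, re.I))

def extract_facts(obit_text: str) -> dict:
    # Stand-in for the fact-extraction models (stage 2): pull out names,
    # dates and places with an off-the-shelf named-entity recognizer.
    doc = nlp(obit_text)
    return {
        "people": [e.text for e in doc.ents if e.label_ == "PERSON"],
        "dates": [e.text for e in doc.ents if e.label_ == "DATE"],
        "places": [e.text for e in doc.ents if e.label_ == "GPE"],
    }

page = ("Mary R. Smith, 84, passed away on March 3, 1952 in Boise. "
        "She is survived by her son John Smith.")
if looks_like_obituary(page):
    print(extract_facts(page))
```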

Ancestry, which has about 3.5 million subscribers, now offers about 262 million obituaries, up from roughly 40 million two years ago. Its database includes about a billion names associated with obituaries, including names of the deceased and their relatives. Besides analyzing the trove of old newspaper pages, the algorithms were also applied to online obituaries coming into Ancestry’s database, making them more searchable. Before the AI overhaul, the roughly 40 million obituaries on Ancestry.com were searchable only by the name of the deceased. That meant a search for “Mary R. Smith,” for instance, would yield obituaries only for people with that name — not other obituaries that mentioned that name as a sibling or child.

Source: Ancestry Taps AI To Sift Through Millions of Obituaries – Slashdot

This Trippy T-Shirt Makes You Invisible to AI

In modern cities, we’re constantly surveilled through CCTV cameras in both public and private spaces, and by companies trying to sell us shit based on everything we do. We are always being watched.

But what if a simple T-shirt could make you invisible to commercial AIs trying to spot humans?

A team of researchers from Northeastern University, IBM, and MIT developed a T-shirt design that hides the wearer from image recognition systems by confusing the algorithms that try to spot people, effectively rendering the wearer invisible to them.

[…]

A T-shirt is a low-barrier way to move around the world unnoticed by AI watchers. Previously, researchers have tried to create adversarial fashion using patches attached to stiff cardboard, so that the design doesn’t distort on soft fabric while the wearer moves. If the design is warped or part of it isn’t visible, it becomes ineffective.

No one’s going to start carrying cardboard patches around, and most of us probably won’t put Juggalo paint on our faces (at least not until everyone’s doing it), so the researchers came up with an approach to account for the ways that moving cloth distorts an image when generating an adversarial design to print on a shirt. As a result, the new shirt allows the wearer to move naturally while (mostly) hiding the person.
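The core trick, as described, is to optimize the printed pattern against an expectation over cloth deformations. A toy sketch of that idea follows; the random warp and the "detector" here are placeholders of my own, not the paper's transformation model or the person detectors it actually attacked:

```python
import torch

# Printable pattern we optimize so that, averaged over simulated cloth
# deformations, a person detector's confidence is minimized.
patch = torch.rand(3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)

def random_warp(p: torch.Tensor) -> torch.Tensor:
    # Stand-in for cloth deformation: random shift plus pixel noise.
    shift = int(torch.randint(0, 8, (1,)))
    return torch.roll(p, shifts=shift, dims=2) + 0.05 * torch.randn_like(p)

def detector_person_score(p: torch.Tensor) -> torch.Tensor:
    # Placeholder differentiable "detector"; a real attack queries an
    # actual object-detection network here.
    return p.mean()

for _ in range(100):
    opt.zero_grad()
    # Average the detection score over several simulated deformations
    # ("expectation over transformation"), then descend on the patch.
    loss = torch.stack(
        [detector_person_score(random_warp(patch)) for _ in range(8)]
    ).mean()
    loss.backward()
    opt.step()
    with torch.no_grad():
        patch.clamp_(0, 1)   # keep the pattern printable
```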

It would be easy to dismiss this sort of thing as too far-fetched to become reality. But as more cities around the country push back against facial recognition in their communities, it’s not hard to imagine some kind of hypebeast Supreme x MIT collab featuring adversarial tees to fool people-detectors in the future. Security professional Kate Rose’s shirts that fool Automatic License Plate Readers, for example, are for sale and walking amongst us already.

Source: This Trippy T-Shirt Makes You Invisible to AI – VICE

The ‘Three-Body Problem’ Has Perplexed Astronomers Since Newton Formulated It. A.I. Just Cracked It in Under a Second.

The mind-bending calculations required to predict how three heavenly bodies orbit each other have baffled physicists since the time of Sir Isaac Newton. Now artificial intelligence (A.I.) has shown that it can solve the problem in a fraction of the time required by previous approaches.

Newton was the first to formulate the problem in the 17th century, but finding a simple way to solve it has proved incredibly difficult. The gravitational interactions between three celestial objects like planets, stars and moons result in a chaotic system — one that is complex and highly sensitive to the starting positions of each body.

[…]

The algorithm they built provided accurate solutions up to 100 million times faster than the most advanced software program, known as Brutus.

[…]

Neural networks must be trained by being fed data before they can make predictions. So the researchers had to generate 9,900 simplified three-body scenarios using Brutus, the current leader when it comes to solving three-body problems.

They then tested how well the neural net could predict the evolution of 5,000 unseen scenarios, and found its results closely matched those of Brutus. However, the A.I.-based program solved the problems in an average of just a fraction of a second, compared with nearly 2 minutes.

The reason programs like Brutus are so slow is that they solve the problem by brute force, said Foley, carrying out calculations for each tiny step of the celestial bodies’ trajectories. The neural net, on the other hand, simply looks at the movements those calculations produce and deduces a pattern that can help predict how future scenarios will play out.

That presents a problem for scaling the system up, though, Foley said. The current algorithm is a proof-of-concept and learned from simplified scenarios, but training on more complex ones or even increasing the number of bodies involved to four or five first requires you to generate the data on Brutus, which can be extremely time-consuming and expensive.
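For a sense of the setup, here is a hedged sketch of supervised trajectory regression of the kind described: a small feed-forward network trained on integrator-generated pairs. Input and output dimensions are chosen for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

# Inputs: a simplified initial configuration plus the query time t.
# Outputs: predicted planar coordinates of the bodies at time t.
net = nn.Sequential(
    nn.Linear(3, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 4),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for the 9,900 (initial condition, t) -> positions pairs that
# would come from the Brutus integrator.
x = torch.rand(9900, 3)
y = torch.rand(9900, 4)

for _ in range(10):                 # a few epochs, for illustration only
    opt.zero_grad()
    loss = loss_fn(net(x), y)       # supervised regression on trajectories
    loss.backward()
    opt.step()

# Inference is one cheap forward pass: fractions of a second, not minutes.
with torch.no_grad():
    print(net(torch.rand(1, 3)))
```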

Source: The ‘Three-Body Problem’ Has Perplexed Astronomers Since Newton Formulated It. A.I. Just Cracked It in Under a Second. | Live Science

AI allows paralyzed person to ‘handwrite’ with his mind twice as fast as using a cursor to select letters

People who are “locked in”—fully paralyzed by stroke or neurological disease—have trouble trying to communicate even a single sentence. Electrodes implanted in a part of the brain involved in motion have allowed some paralyzed patients to move a cursor and select onscreen letters with their thoughts. Users have typed up to 39 characters per minute, but that’s still about three times slower than natural handwriting.

In the new experiments, a volunteer paralyzed from the neck down instead imagined moving his arm to write each letter of the alphabet. That brain activity helped train a computer model known as a neural network to interpret the commands, tracing the intended trajectory of his imagined pen tip to create letters (above).

Eventually, the computer could read out the volunteer’s imagined sentences with roughly 95% accuracy at a speed of about 66 characters per minute, the team reported here this week at the annual meeting of the Society for Neuroscience.
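Schematically, the decoding step might look like the sketch below (the architecture and channel count are my assumptions, not the team's reported design): binned neural activity goes through a recurrent network that emits pen-tip velocities, which integrate into letter traces.

```python
import torch
import torch.nn as nn

rnn = nn.GRU(input_size=192, hidden_size=256, batch_first=True)  # 192 channels assumed
head = nn.Linear(256, 2)             # predicted (vx, vy) pen-tip velocity per time bin

spikes = torch.rand(1, 100, 192)     # 100 time bins of binned firing rates (fake data)
hidden, _ = rnn(spikes)
velocity = head(hidden)              # shape (1, 100, 2)
trajectory = velocity.cumsum(dim=1)  # integrate velocity to trace the pen path
print(trajectory.shape)
```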

The researchers expect the speed to increase with more practice. As they refine the technology, they will also use their neural recordings to better understand how the brain plans and orchestrates fine motor movements.

Source: AI allows paralyzed person to ‘handwrite’ with his mind | Science | AAAS

An AI Pioneer Wants His Algorithms to Understand the ‘Why’

In March, Yoshua Bengio received a share of the Turing Award, the highest accolade in computer science, for contributions to the development of deep learning—the technique that triggered a renaissance in artificial intelligence, leading to advances in self-driving cars, real-time speech translation, and facial recognition.

Now, Bengio says deep learning needs to be fixed. He believes it won’t realize its full potential, and won’t deliver a true AI revolution, until it can go beyond pattern recognition and learn more about cause and effect. In other words, he says, deep learning needs to start asking why things happen.

[…]

Machine learning systems including deep learning are highly specific, trained for a particular task, like recognizing cats in images, or spoken commands in audio. Since bursting onto the scene around 2012, deep learning has demonstrated a particularly impressive ability to recognize patterns in data; it’s been put to many practical uses, from spotting signs of cancer in medical scans to uncovering fraud in financial data.

But deep learning is fundamentally blind to cause and effect. Unlike a real doctor, a deep learning algorithm cannot explain why a particular image may suggest disease. This means deep learning must be used cautiously in critical situations.

[…]

At his research lab, Bengio is working on a version of deep learning capable of recognizing simple cause-and-effect relationships. He and colleagues recently posted a research paper outlining the approach. They used a dataset that maps causal relationships between real-world phenomena, such as smoking and lung cancer, in terms of probabilities. They also generated synthetic datasets of causal relationships.
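A toy example of the observational-versus-interventional distinction at stake (my illustration, not the paper's method): data generated by a structural equation A → B looks symmetric as a correlation, but interventions break the symmetry.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=100_000)
b = 2.0 * a + rng.normal(size=100_000)   # structural equation: B := 2A + noise

print(np.corrcoef(a, b)[0, 1])           # strong correlation, direction invisible

# Intervention do(A := 3): B shifts, because B is caused by A.
b_do_a = 2.0 * 3.0 + rng.normal(size=100_000)
print(b_do_a.mean())                     # approximately 6

# Intervention do(B := 3): A is unchanged; its mechanism doesn't involve B.
a_do_b = rng.normal(size=100_000)
print(a_do_b.mean())                     # approximately 0
```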

[…]

Others believe the focus on deep learning may be part of the problem. Gary Marcus, a professor emeritus at NYU and the author of a recent book that highlights the limits of deep learning, Rebooting AI: Building Artificial Intelligence We Can Trust, says Bengio’s interest in causal reasoning signals a welcome shift in thinking.

“Too much of deep learning has focused on correlation without causation, and that often leaves deep learning systems at a loss when they are tested on conditions that aren’t quite the same as the ones they were trained on,” he says.

Marcus adds that the lesson from human experience is obvious. “When children ask ‘why?’ they are asking about causality,” he says. “When machines start asking why, they will be a lot smarter.”

Source: An AI Pioneer Wants His Algorithms to Understand the ‘Why’ | WIRED

This is a hugely important – and old – question in this field. Without the ‘why’, humans must ‘just trust’ answers given by AI that seem intuitively strange. When you’re talking about health care or human-related activities such as liability, ‘just accept what I’m telling you’ isn’t good enough.

TensorFlow 2.0 is now available!

TensorFlow 2.0 is driven by the community telling us they want an easy-to-use platform that is both flexible and powerful, and which supports deployment to any platform. TensorFlow 2.0 provides a comprehensive ecosystem of tools for developers, enterprises, and researchers who want to push the state-of-the-art in machine learning and build scalable ML-powered applications.

Coding with TensorFlow 2.0

TensorFlow 2.0 makes development of ML applications much easier. With tight integration of Keras into TensorFlow, eager execution by default, and Pythonic function execution, TensorFlow 2.0 makes the experience of developing applications as familiar as possible for Python developers. For researchers pushing the boundaries of ML, we have invested heavily in TensorFlow’s low-level API: We now export all ops that are used internally, and we provide inheritable interfaces for crucial concepts such as variables and checkpoints. This allows you to build onto the internals of TensorFlow without having to rebuild TensorFlow.
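A short taste of the workflow those changes enable; a minimal sketch using standard TF 2.0 APIs, trained on random stand-in data:

```python
import numpy as np
import tensorflow as tf

# Eager execution is the default in TF 2.0: ops run immediately, no Session.
print(tf.square(tf.constant([1.0, 2.0])))

# Keras is the central high-level API, tightly integrated into TensorFlow.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on random stand-in data just to show the workflow end to end.
x = np.random.rand(256, 32).astype("float32")
y = np.random.randint(0, 10, size=256)
model.fit(x, y, epochs=1, batch_size=32)

# "Pythonic function execution": @tf.function traces plain Python into a graph.
@tf.function
def predict_class(inputs):
    return tf.argmax(model(inputs), axis=1)

print(predict_class(x[:3]))
```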

Source: TensorFlow 2.0 is now available! – TensorFlow – Medium

ETSI launches specification group on Securing Artificial Intelligence

ETSI is pleased to announce the creation of a new Industry Specification Group on Securing Artificial Intelligence (ISG SAI). The group will develop technical specifications to mitigate threats arising from the deployment of AI throughout multiple ICT-related industries. This includes threats to artificial intelligence systems from both conventional sources and other AIs.

The ETSI Securing Artificial Intelligence group was created in anticipation of autonomous mechanical and computing entities making decisions that act against the relying parties, either by design or as a result of malicious intent. The conventional cycle of network risk analysis and countermeasure deployment, represented by the Identify-Protect-Detect-Respond cycle, needs to be re-assessed when an autonomous machine is involved.

The intent of the ISG SAI is therefore to address 3 aspects of artificial intelligence in the standards domain:

  • Securing AI from attack e.g. where AI is a component in the system that needs defending
  • Mitigating against AI e.g. where AI is the ‘problem’ or is used to improve and enhance other more conventional attack vectors
  • Using AI to enhance security measures against attack from other things e.g. AI is part of the ‘solution’ or is used to improve and enhance more conventional countermeasures.

The purpose of the ETSI ISG SAI is to develop the technical knowledge that acts as a baseline in ensuring that artificial intelligence is secure. Stakeholders impacted by the activity of ETSI’s group include end users, manufacturers, operators and governments.

Source: ETSI – ETSI launches specification group on Securing Artificial Intelligence

AI equal with human experts in medical diagnosis with images, study finds

Artificial intelligence is on a par with human experts when it comes to making medical diagnoses based on images, a review has found.

The potential for artificial intelligence in healthcare has caused excitement, with advocates saying it will ease the strain on resources, free up time for doctor-patient interactions and even aid the development of tailored treatment. Last month the government announced £250m of funding for a new NHS artificial intelligence laboratory.

However, experts have warned the latest findings are based on a small number of studies, since the field is littered with poor-quality research.

One burgeoning application is the use of AI in interpreting medical images – a field that relies on deep learning, a sophisticated form of machine learning in which a series of labelled images are fed into algorithms that pick out features within them and learn how to classify similar images. This approach has shown promise in diagnosis of diseases from cancers to eye conditions.

However, questions remain about how such deep learning systems measure up to human skills. Now researchers say they have conducted the first comprehensive review of published studies on the issue, and found humans and machines are on a par.

Prof Alastair Denniston, at the University Hospitals Birmingham NHS foundation trust and a co-author of the study, said the results were encouraging but the study was a reality check for some of the hype about AI.

Dr Xiaoxuan Liu, the lead author of the study and from the same NHS trust, agreed. “There are a lot of headlines about AI outperforming humans, but our message is that it can at best be equivalent,” she said.

Writing in the Lancet Digital Health, Denniston, Liu and colleagues reported how they focused on research papers published since 2012 – a pivotal year for deep learning.

An initial search turned up more than 20,000 relevant studies. However, only 14 studies – all based on human disease – reported good quality data, tested the deep learning system with images from a separate dataset to the one used to train it, and showed the same images to human experts.

The team pooled the most promising results from within each of the 14 studies to reveal that deep learning systems correctly detected a disease state 87% of the time – compared with 86% for healthcare professionals – and correctly gave the all-clear 93% of the time, compared with 91% for human experts.
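For readers unfamiliar with those two headline figures, they correspond to sensitivity (correctly detecting a disease state) and specificity (correctly giving the all-clear). A minimal sketch with invented counts, not the study's data:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of diseased cases the classifier flags as diseased."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of healthy cases the classifier clears as healthy."""
    return true_neg / (true_neg + false_pos)

# Example: 87 of 100 diseased cases caught, 93 of 100 healthy cases cleared.
print(sensitivity(true_pos=87, false_neg=13))   # 0.87
print(specificity(true_neg=93, false_pos=7))    # 0.93
```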

However, the healthcare professionals in these scenarios were not given additional patient information that they would have had in the real world, which could steer their diagnosis.

Prof David Spiegelhalter, the chair of the Winton centre for risk and evidence communication at the University of Cambridge, said the field was awash with poor research.

“This excellent review demonstrates that the massive hype over AI in medicine obscures the lamentable quality of almost all evaluation studies,” he said. “Deep learning can be a powerful and impressive technique, but clinicians and commissioners should be asking the crucial question: what does it actually add to clinical practice?”

Source: AI equal with human experts in medical diagnosis, study finds | Technology | The Guardian

No Bones about It: People Recognize Objects by Visualizing Their “Skeletons”

Humans effortlessly know that a tree is a tree and a dog is a dog no matter the size, color or angle at which they’re viewed. In fact, identifying such visual elements is one of the earliest tasks children learn. But researchers have struggled to determine how the brain does this simple evaluation. As deep-learning systems have come to master this ability, scientists have started to ask whether computers analyze data—and particularly images—similarly to the human brain. “The way that the human mind, the human visual system, understands shape is a mystery that has baffled people for many generations, partly because it is so intuitive and yet it’s very difficult to program,” says Jacob Feldman, a psychology professor at Rutgers University.

A paper published in Scientific Reports in June comparing various object recognition models came to the conclusion that people do not evaluate an object like a computer processing pixels, but based on an imagined internal skeleton. In the study, researchers from Emory University, led by associate professor of psychology Stella Lourenco, wanted to know if people judged object similarity based on the objects’ skeletons—an invisible axis below the surface that runs through the middle of the object’s shape. The scientists generated 150 unique three-dimensional shapes built around 30 different skeletons and asked participants to determine whether or not two of the objects were the same. Sure enough, the more similar the skeletons were, the more likely participants were to label the objects as the same. The researchers also compared how well other models, such as neural networks (artificial intelligence–based systems) and pixel-based evaluations of the objects, predicted people’s decisions. While the other models matched performance on the task relatively well, the skeletal model always won.

“There’s a big emphasis on deep neural networks for solving these problems [of object recognition]. These are networks that require lots and lots of training to even learn a single object category, whereas the model that we investigated, a skeletal model, seems to be able to do this without this experience,” says Vladislav Ayzenberg, a doctoral student in Lourenco’s lab. “What our results show is that humans might be able to recognize objects by their internal skeletons, even when you compare skeletal models to these other well-established neural net models of object recognition.”

Next, the researchers pitted the skeletal model against other models of shape recognition, such as ones that focus on the outline. To do so, Ayzenberg and Lourenco manipulated the objects in certain ways, such as shifting the placement of an arm in relation to the rest of the body or changing how skinny, bulging, or wavy the outlines were. People once again judged the objects as being similar based on their skeletons, not their surface qualities.
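For 2-D silhouettes, such a skeleton (the medial axis) is straightforward to compute. A small sketch using scikit-image; the blob below is a toy stand-in of mine, not one of the study's generated shapes:

```python
import numpy as np
from skimage.morphology import medial_axis

# Toy binary silhouette: a filled rectangle with a protruding "arm".
shape = np.zeros((60, 60), dtype=bool)
shape[20:40, 10:50] = True   # body
shape[5:20, 25:30] = True    # arm

# medial_axis returns the skeleton plus, optionally, the distance of each
# skeleton pixel to the boundary (a thickness measure along the axis).
skeleton, distance = medial_axis(shape, return_distance=True)
print(skeleton.sum(), "skeleton pixels")
```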

Source: No Bones about It: People Recognize Objects by Visualizing Their “Skeletons” – Scientific American

New Data Science Cheat Sheet, by Maverick Lin

Below is an extract of a 10-page cheat sheet about data science, compiled by Maverick Lin. The cheatsheet serves as a reference for data science, covering basic concepts in probability, statistics, statistical learning, machine learning, deep learning, big data frameworks and SQL. It is loosely based on The Data Science Design Manual by Steven S. Skiena and An Introduction to Statistical Learning by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani, and was inspired by William Chen’s The Only Probability Cheatsheet You’ll Ever Need, located here.

Full cheat sheet available here as a PDF document. Originally posted here.

For related cheat sheets (machine learning, deep learning and so on) follow this link.

Source: New Data Science Cheat Sheet, by Maverick Lin – Data Science Central

Scammer Successfully Deepfaked CEO’s Voice To Fool Underling Into Transferring $243,000

The CEO of an energy firm based in the UK thought he was following his boss’s urgent orders in March when he transferred funds to a third party. But the request actually came from the AI-assisted voice of a fraudster.

The Wall Street Journal reports that the mark believed he was speaking to the CEO of his business’s parent company, based in Germany. The German-accented caller told him to send €220,000 ($243,000 USD) to a Hungarian supplier within the hour. The firm’s insurance company, Euler Hermes Group SA, shared information about the crime with WSJ but would not reveal the name of the targeted businesses.

Euler Hermes fraud expert Rüdiger Kirsch told WSJ that the victim recognized his superior’s voice because it had a hint of a German accent and the same “melody.” This was reportedly the first time Euler Hermes has dealt with clients being affected by crimes that used AI mimicry.

Source: Scammer Successfully Deepfaked CEO’s Voice To Fool Underling Into Transferring $243,000

IBM open sources Adversarial Robustness 360 toolbox for AI

This is a library dedicated to adversarial machine learning. Its purpose is to allow rapid crafting and analysis of attacks and defense methods for machine learning models. ART provides implementations of many state-of-the-art methods for attacking and defending classifiers.

Documentation for ART: https://adversarial-robustness-toolbox.readthedocs.io

https://github.com/IBM/adversarial-robustness-toolbox
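A minimal example of the crafting workflow ART supports: wrap a model in one of ART's classifier wrappers and hand it to an attack. Module paths follow recent ART releases and may differ in older versions; the model and data below are trivial stand-ins:

```python
import numpy as np
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier
from sklearn.linear_model import LogisticRegression

# Train a tiny stand-in model on random data (placeholder for a real model).
x_train = np.random.rand(100, 20).astype(np.float32)
y_train = np.random.randint(0, 2, size=100)
model = LogisticRegression().fit(x_train, y_train)

# Wrap the model so ART's attacks can query it, then perturb the inputs.
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_train[:5])
print("max perturbation:", np.abs(x_adv - x_train[:5]).max())
```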

IBM releases AI Fairness 360 tool open source

The AI Fairness 360 toolkit is an open-source library to help detect and remove bias in machine learning models. The AI Fairness 360 Python package includes a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in datasets and models.

The AI Fairness 360 interactive experience provides a gentle introduction to the concepts and capabilities. The tutorials and other notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.

Because the toolkit offers such a comprehensive set of capabilities, it may be confusing to figure out which metrics and algorithms are most appropriate for a given use case. To help, we have created some guidance material that can be consulted.

https://github.com/IBM/AIF360
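A small sketch of the metrics workflow, using class names from AIF360's documented API; the four-row dataset and its columns are invented for illustration:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Tiny made-up loan dataset: "sex" is the protected attribute,
# "approved" is the (binary) favorable outcome.
df = pd.DataFrame({
    "age": [25, 52, 38, 61],
    "approved": [1, 0, 1, 1],
    "sex": [0, 1, 0, 1],
})
dataset = BinaryLabelDataset(
    df=df, label_names=["approved"], protected_attribute_names=["sex"],
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)
# Difference in favorable-outcome rates between groups (0 means parity).
print("statistical parity difference:", metric.statistical_parity_difference())
```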

IBM releases AI Explainability tools

The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics.

The AI Explainability 360 interactive experience provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas. The tutorials and example notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.

There is no single approach to explainability that works best. There are many ways to explain: data vs. model, directly interpretable vs. post hoc explanation, local vs. global, etc. It may therefore be confusing to figure out which algorithms are most appropriate for a given use case. To help, we have created some guidance material and a chart that can be consulted.

Github link

Google: Neural net can spot breast, prostate tumors through microscope

Google Health’s so-called augmented-reality microscope has proven surprisingly accurate at detecting and diagnosing cancerous tumors in real time.

The device is essentially a standard microscope decked out with two extra components: a camera, and a computer running AI software with an Nvidia Titan Xp GPU to accelerate the number crunching. The camera continuously snaps images of body tissue placed under the microscope, and passes these images to a convolutional neural network on the computer to analyze. In return, the neural net spits out, allegedly in real time, a heatmap of the cells in the image, labeling areas that are benign or abnormal on the screen for doctors to inspect.

Google’s eggheads tried using the device to detect the presence of cancer in samples of breast and prostate cells. The algorithms had a performance score of 0.92 when detecting cancerous lymph nodes in breast cancer and 0.93 for prostate cancer, with one being a perfect score, so it’s not too bad for what they describe as a proof of concept.

Details of the microscope system have been described in a paper published in Nature this week. The training data for breast cancer was taken from here, and here for prostate cancer. Some of the training data was reserved for inference testing.

The device is a pretty challenging system to build: it requires a processing pipeline that can handle, on the fly, microscope snaps that are high resolution enough to capture details at the cellular level. The images used in this experiment measure 5,120 × 5,120 pixels. That’s much larger than what’s typically used for today’s deep learning algorithms, which have millions of parameters and require billions of floating-point operations just to process images as big as 300 pixels by 300 pixels.
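Conceptually, the real-time heatmap amounts to patch-wise classification over a very large frame. A simplified sketch of that idea (my illustration, not Google's pipeline):

```python
import numpy as np

def heatmap(frame: np.ndarray, classify_patch, patch: int = 320) -> np.ndarray:
    """classify_patch maps a (patch, patch) tile to a tumor probability."""
    rows, cols = frame.shape[0] // patch, frame.shape[1] // patch
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            tile = frame[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            out[i, j] = classify_patch(tile)
    return out

# Toy stand-in classifier flags bright tiles; a trained CNN goes here.
# (A real frame would be 5,120 x 5,120; this one is shrunk to keep the demo light.)
frame = np.random.rand(1280, 1280).astype(np.float32)
print(heatmap(frame, lambda t: float(t.mean() > 0.5)).shape)   # (4, 4)
```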

Source: It’s official – Google AI gives you cancer …diagnosis in real time: Neural net can spot breast, prostate tumors • The Register

Mysterious, Ancient Radio Signals Keep Pelting Earth. Astronomers Designed an AI to Hunt Them Down.

Sudden shrieks of radio waves from deep space keep slamming into radio telescopes on Earth, spattering those instruments’ detectors with confusing data. And now, astronomers are using artificial intelligence to pinpoint the source of the shrieks, in the hope of explaining what’s sending them to Earth from — researchers suspect — billions of light-years across space.

Usually, these weird, unexplained signals are detected only after the fact, when astronomers notice out-of-place spikes in their data — sometimes years after the incident. The signals have complex, mysterious structures, patterns of peaks and valleys in radio waves that play out in just milliseconds. That’s not the sort of signal astronomers expect to come from a simple explosion, or any other one of the standard events known to scatter spikes of electromagnetic energy across space. Astronomers call these strange signals fast radio bursts (FRBs). Ever since the first one was uncovered in 2007, using data recorded in 2001, there’s been an ongoing effort to pin down their source. But FRBs arrive at random times and places, and existing human technology and observation methods aren’t well-primed to spot these signals.

Now, in a paper published July 4 in the journal Monthly Notices of the Royal Astronomical Society, a team of astronomers wrote that they managed to detect five FRBs in real time using a single radio telescope.

Wael Farah, a doctoral student at Swinburne University of Technology in Melbourne, Australia, developed a machine-learning system that recognized the signatures of FRBs as they arrived at the University of Sydney’s Molonglo Radio Observatory, near Canberra. As Live Science has previously reported, many scientific instruments, including radio telescopes, produce more data per second than they can reasonably store. So they don’t record anything in the finest detail except their most interesting observations.

Farah’s system trained the Molonglo telescope to spot FRBs and switch over to its most detailed recording mode, producing the finest records of FRBs yet.
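The triggering logic can be pictured as a classifier gating the recording mode. A schematic sketch follows; the function names and the toy threshold are mine, not Farah's system:

```python
import numpy as np

def frb_score(chunk: np.ndarray) -> float:
    """Stand-in for the trained classifier; returns P(FRB) for one chunk."""
    return float(chunk.max() > 5.0)   # toy threshold for a bright dispersed pulse

def process_stream(stream, record_full_resolution):
    # Chunks arrive faster than they can all be stored at full detail, so
    # the detailed mode is triggered only on likely FRB signatures.
    for chunk in stream:
        if frb_score(chunk) > 0.5:
            record_full_resolution(chunk)

# Usage with fake data: one "bright" chunk among quiet ones.
stream = [np.random.randn(1024) for _ in range(9)] + [np.full(1024, 9.0)]
process_stream(stream, lambda c: print("FRB candidate recorded"))
```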

Based on their data, the researchers predicted that between 59 and 157 theoretically detectable FRBs splash across our skies every day. The scientists also used the immediate detections to hunt for related flares in data from X-ray, optical and other radio telescopes — in hopes of finding some visible event linked to the FRBs — but had no luck.

Their research showed, however, that one of the most peculiar (and frustrating, for research purposes) traits of FRBs appears to be real: The signals, once arriving, never repeat themselves. Each one appears to be a singular event in space that will never happen again.

Source: Mysterious, Ancient Radio Signals Keep Pelting Earth. Astronomers Designed an AI to Hunt Them Down. | Live Science

AI system ‘should be recognised as inventor’

An artificial intelligence system should be recognised as the inventor of two ideas in patents filed on its behalf, a team of academics says.

The AI has designed interlocking food containers that are easy for robots to grasp and a warning light that flashes in a rhythm that is hard to ignore.

Patent offices insist innovations are attributed to humans – to avoid legal complications that would arise if corporate inventorship were recognised.

The academics say this is “outdated”.

And it could see patent offices refusing to assign any intellectual property rights for AI-generated creations.

As a result, two professors from the University of Surrey have teamed up with the Missouri-based inventor of Dabus AI to file patents in the system’s name with the relevant authorities in the UK, Europe and US.

‘Inventive act’

Dabus was previously best known for creating surreal art thanks to the way “noise” is mixed into its neural networks to help generate unusual ideas.

Unlike some machine-learning systems, Dabus has not been trained to solve particular problems.

Instead, it seeks to devise and develop new ideas – “what is traditionally considered the mental part of the inventive act”, according to creator Stephen Thaler.

The first patent describes a food container that uses fractal designs to create pits and bulges in its sides. One benefit is that several containers can be fitted together more tightly to help them be transported safely. Another is that it should be easier for robotic arms to pick them up and grip them.

[Image: diagram showing how a container’s shape could be based on fractals. Copyright Ryan Abbott]

The second describes a lamp designed to flicker in a rhythm mimicking patterns of neural activity that accompany the formation of ideas, making it more difficult to ignore.

Law professor Ryan Abbott told BBC News: “These days, you commonly have AIs writing books and taking pictures – but if you don’t have a traditional author, you cannot get copyright protection in the US.

“So with patents, a patent office might say, ‘If you don’t have someone who traditionally meets human-inventorship criteria, there is nothing you can get a patent on.’

“In which case, if AI is going to be how we’re inventing things in the future, the whole intellectual property system will fail to work.”

Instead, he suggested, an AI should be recognised as being the inventor and whoever the AI belonged to should be the patent’s owner, unless they sold it on.

However, Prof Abbott acknowledged lawmakers might need to get involved to settle the matter and that it could take until the mid-2020s to resolve the issue.

A spokeswoman for the European Patent Office indicated that it would be a complex matter.

“It is a global consensus that an inventor can only be a person who makes a contribution to the invention’s conception in the form of devising an idea or a plan in the mind,” she explained.

“The current state of technological development suggests that, for the foreseeable future, AI is… a tool used by a human inventor.

“Any change… [would] have implications reaching far beyond patent law, ie to authors’ rights under copyright laws, civil liability and data protection.

“The EPO is, of course, aware of discussions in interested circles and the wider public about whether AI could qualify as inventor.”

The UK’s Patents Act 1977 currently requires an inventor to be a person, but the Intellectual Property Office is aware of the issue.

“The government believes that AI technology could increase the UK’s GDP by 10% in the next decade, and the IPO is focused on responding to the challenges that come with this growth,” said a spokeswoman.

Source: AI system ‘should be recognised as inventor’ – BBC News

Humanitarian Data Exchange

The Humanitarian Data Exchange (HDX) is an open platform for sharing data across crises and organisations. Launched in July 2014, the goal of HDX is to make humanitarian data easy to find and use for analysis. Our growing collection of datasets has been accessed by users in over 200 countries and territories.

HDX is managed by OCHA’s Centre for Humanitarian Data, which is located in The Hague. OCHA is part of the United Nations Secretariat and is responsible for bringing together humanitarian actors to ensure a coherent response to emergencies. The HDX team includes OCHA staff and a number of consultants who are based in North America, Europe and Africa.

[…]

We define humanitarian data as:

  1. data about the context in which a humanitarian crisis is occurring (e.g., baseline/development data, damage assessments, geospatial data)
  2. data about the people affected by the crisis and their needs
  3. data about the response by organisations and people seeking to help those who need assistance.

HDX uses open-source software called CKAN for its technical back-end. You can find all of our code on GitHub.

Source: Welcome – Humanitarian Data Exchange

How Facebook is Using Machine Learning to Map the World Population

When it comes to knowing where humans around the world actually live, resources come in varying degrees of accuracy and sophistication.

Heavily urbanized and mature economies generally produce a wealth of up-to-date information on population density and granular demographic data. In rural Africa or fast-growing regions in the developing world, tracking methods cannot always keep up, or in some cases may be non-existent.

This is where new maps, produced by researchers at Facebook, come in. Building upon CIESIN’s Gridded Population of the World project, Facebook is using machine learning models on high-resolution satellite imagery to paint a definitive picture of human settlement around the world. Let’s zoom in.

Connecting the Dots

With all other details stripped away, human settlement can form some interesting patterns. One of the most compelling examples is Egypt, where 95% of the population lives along the Nile River. Below, we can clearly see where people live, and where they don’t.

[Map: Facebook population density map of Egypt; a full-resolution version is available at the source]

While it is possible to use a tool like Google Earth to view nearly any location on the globe, the problem is analyzing the imagery at scale. This is where machine learning comes into play.

Finding the People in the Petabytes

High-resolution imagery of the entire globe takes up about 1.5 petabytes of storage, making the task of classifying the data extremely daunting. It’s only very recently that technology was up to the task of correctly identifying buildings within all those images.

To get the results we see today, researchers used a process of elimination to discard locations that couldn’t contain a building, then ranked the remaining locations by the likelihood that they contained one.


Facebook identified structures at scale using a process called weakly supervised learning. After training the model using large batches of photos, then checking over the results, Facebook was able to reach a 99.6% labeling accuracy for positive examples.
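Put together, the two stages described above look roughly like the sketch below (entirely illustrative; Facebook's actual models are convolutional networks over satellite tiles):

```python
import numpy as np

def could_contain_building(tile: np.ndarray) -> bool:
    # Stand-in elimination rule: near-uniform tiles (open water, bare
    # desert) are discarded without ever reaching the classifier.
    return tile.std() > 0.05

def building_likelihood(tile: np.ndarray) -> float:
    # Placeholder for the weakly supervised model's building score.
    return float(tile.mean())

tiles = [np.random.rand(64, 64) for _ in range(1000)]
candidates = [t for t in tiles if could_contain_building(t)]       # elimination
ranked = sorted(candidates, key=building_likelihood, reverse=True)  # ranking
print(f"{len(ranked)} of {len(tiles)} tiles kept for classification")
```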

Why it Matters

An accurate picture of where people live can be a matter of life and death.

For humanitarian agencies working in Africa, effectively distributing aid or vaccinating populations is still a challenge due to the lack of reliable maps and population density information. Researchers hope that these detailed maps will be used to save lives and improve living conditions in developing regions.

For example, Malawi is one of the world’s least urbanized countries, so finding its 19 million citizens is no easy task for people doing humanitarian work there. These maps clearly show where people live and allow organizations to create accurate population density estimates for specific areas.

[Map: rural Malawi population pattern]

Visit the project page for a full explanation and to access the full database of country maps.

Source: How Facebook is Using Machine Learning to Map the World Population

Meet the AI robots being used to help solve America’s recycling crisis

The way the robots work is simple. Guided by cameras and computer systems trained to recognize specific objects, the robots’ arms glide over moving conveyor belts until they reach their target. Oversized tongs or fingers with sensors that are attached to the arms snag cans, glass, plastic containers, and other recyclable items out of the rubbish and place them into nearby bins.

The robots — most of which have come online only within the past year — are assisting human workers and can work up to twice as fast. With continued improvements in the bots’ ability to spot and extract specific objects, they could become a formidable new force in the $6.6 billion U.S. industry.

Researchers like Lily Chin, a PhD student at the Distributed Robotics Lab at MIT, are working to develop sensors for these robots that can improve their sense of touch, so they can distinguish plastic, paper and metal through their fingers. “Right now, robots are mostly reliant on computer vision, but they can get confused and make mistakes,” says Chin. “So now we want to integrate these new tactile capabilities.”

Denver-based AMP Robotics is one of the companies on the leading edge of innovation in the field. It has developed software, the AMP Neuron platform, which uses computer vision and machine learning so robots can recognize different colors, textures, shapes, sizes and patterns to identify material characteristics and sort waste.

The robots are being installed at the Single Stream Recyclers plant in Sarasota, Florida, and will be able to pick 70 to 80 items a minute, twice as fast as humanly possible and with greater accuracy.

[Image: Bulk Handling Systems Max-AI AQC-C trash-separating robot. Credit: CNBC / Bulk Handling Systems]

“Using this technology you can increase the quality of the material and in some cases double or triple its resale value,” says AMP Robotics CEO Matanya Horowitz. “Quality standards are getting stricter that’s why companies and researchers are working on high tech solutions.”

Source: Meet the robots being used to help solve America’s recycling crisis

Intellectual Debt (in AI): With Great Power Comes Great Ignorance

For example, aspirin was discovered in 1897, and an explanation of how it works followed in 1995. That, in turn, has spurred some research leads on making better pain relievers through something other than trial and error.

This kind of discovery — answers first, explanations later — I call “intellectual debt.” We gain insight into what works without knowing why it works. We can put that insight to use immediately, and then tell ourselves we’ll figure out the details later. Sometimes we pay off the debt quickly; sometimes, as with aspirin, it takes a century; and sometimes we never pay it off at all.

Be they of money or ideas, loans can offer great leverage. We can get the benefits of money — including use as investment to produce more wealth — before we’ve actually earned it, and we can deploy new ideas before having to plumb them to bedrock truth.

Indebtedness also carries risks. For intellectual debt, these risks can be quite profound, both because we are borrowing as a society, rather than individually, and because new technologies of artificial intelligence — specifically, machine learning — are bringing the old model of drug discovery to a seemingly unlimited number of new areas of inquiry. Humanity’s intellectual credit line is undergoing an extraordinary, unasked-for bump up in its limit.

[…]

Technical debt arises when systems are tweaked hastily, catering to an immediate need to save money or implement a new feature, while increasing long-term complexity. Anyone who has added a device every so often to a home entertainment system can attest to the way in which a series of seemingly sensible short-term improvements can produce an impenetrable rat’s nest of cables. When something stops working, this technical debt often needs to be paid down as an aggravating lump sum — likely by tearing the components out and rewiring them in a more coherent manner.

[…]

Machine learning has made remarkable strides thanks to theoretical breakthroughs, zippy new hardware, and unprecedented data availability. The distinct promise of machine learning lies in suggesting answers to fuzzy, open-ended questions by identifying patterns and making predictions.

[…]

Researchers have pointed out thorny problems of technical debt afflicting AI systems that make it seem comparatively easy to find a retiree to decipher a bank system’s COBOL. They describe how machine learning models become embedded in larger ones and are then forgotten, even as their original training data goes stale and their accuracy declines.

But machine learning doesn’t merely implicate technical debt. There are some promising approaches to building machine learning systems that in fact can offer some explanations — sometimes at the cost of accuracy — but they are the rare exceptions. Otherwise, machine learning is fundamentally patterned like drug discovery, and it thus incurs intellectual debt. It stands to produce answers that work, without offering any underlying theory. While machine learning systems can surpass humans at pattern recognition and predictions, they generally cannot explain their answers in human-comprehensible terms. They are statistical correlation engines — they traffic in byzantine patterns with predictive utility, not neat articulations of relationships between cause and effect. Marrying power and inscrutability, they embody Arthur C. Clarke’s observation that any sufficiently advanced technology is indistinguishable from magic.

But here there is no David Copperfield or Ricky Jay who knows the secret behind the trick. No one does. Machine learning at its best gives us answers as succinct and impenetrable as those of a Magic 8-Ball — except they appear to be consistently right. When we accept those answers without independently trying to ascertain the theories that might animate them, we accrue intellectual debt.

Source: Intellectual Debt: With Great Power Comes Great Ignorance

Waymo and DeepMind mimic evolution to develop a new, better way to train self-driving AI

The two worked together to bring a training method called Population Based Training (PBT for short) to bear on Waymo’s challenge of building better virtual drivers, and the results were impressive — DeepMind says in a blog post that using PBT decreased false positives by 24% in a network that identifies and places boxes around pedestrians, bicyclists and motorcyclists spotted by a Waymo vehicle’s many sensors. Not only that, but it also resulted in savings in terms of both training time and resources, using about 50% of both compared to standard methods that Waymo was using previously.

[…]

To step back a little, let’s look at what PBT even is. Basically, it’s a method of training that takes its cues from how Darwinian evolution works. Neural nets essentially work by trying something and then measuring those results against some kind of standard to see if their attempt is more “right” or more “wrong” based on the desired outcome.

[…]

But all that comparative training requires a huge amount of resources, and sorting the good from the bad in terms of which are working out relies on either the gut feeling of individual engineers, or massive-scale search with a manual component involved where engineers “weed out” the worst performing neural nets to free up processing capabilities for better ones.

What DeepMind and Waymo did with this experiment was essentially automate that weeding, automatically killing the “bad” training runs and replacing them with better-performing spin-offs of the best-in-class networks running the task. That’s where evolution comes in, since it’s kind of a process of artificial natural selection.
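In pseudocode-like Python, PBT's exploit/explore loop reduces to something like the following sketch (simplified from the published PBT recipe; the training and scoring functions are placeholders):

```python
import random

# Population of training runs, each with its own hyperparameters.
population = [
    {"lr": 10 ** random.uniform(-5, -2), "score": 0.0, "weights": None}
    for _ in range(10)
]

def train_and_eval(member):
    # Placeholder: one training interval followed by an evaluation.
    member["score"] = random.random() - member["lr"]   # fake fitness

for generation in range(20):
    for m in population:
        train_and_eval(m)
    population.sort(key=lambda m: m["score"], reverse=True)
    top, bottom = population[:2], population[-2:]
    for loser in bottom:                       # automatically "weed out" the worst
        parent = random.choice(top)
        loser["weights"] = parent["weights"]   # exploit: copy the winner's state
        loser["lr"] = parent["lr"] * random.choice([0.8, 1.2])   # explore: perturb
```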

Source: Waymo and DeepMind mimic evolution to develop a new, better way to train self-driving AI | TechCrunch

Wow, I hate it when writers actually tell you to read a sentence again (cut out for your mental wellness).