The search giant said Wednesday it’s beginning public testing of the software, which debuted in May and which is designed to make calls to businesses and book appointments. Duplex instantly raised questions over the ethics and privacy implications of using an AI assistant to hold lifelike conversations for you.
Google says its plan is to start its public trial with a small group of “trusted testers” and businesses that have opted into receiving calls from Duplex. Over the “coming weeks,” the software will only call businesses to confirm business and holiday hours, such as open and close times for the Fourth of July. People will be able to start booking reservations at restaurants and hair salons starting “later this summer.”
It took nearly a century of trial and error for human scientists to organize the periodic table of elements, arguably one of the greatest scientific achievements in chemistry, into its current form.
A new artificial intelligence (AI) program developed by Stanford physicists accomplished the same feat in just a few hours.
Called Atom2Vec, the program successfully learned to distinguish between different atoms after analyzing a list of chemical compound names from an online database. The unsupervised AI then used concepts borrowed from the field of natural language processing – in particular, the idea that the properties of words can be understood by looking at other words surrounding them – to cluster the elements according to their chemical properties.
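The underlying idea mirrors word embeddings: elements that appear in similar compounds get similar vectors. As a rough illustration only (the toy compound list, co-occurrence counting and SVD below are stand-ins, not the actual Atom2Vec model or its database), one could build element vectors like this:

```python
import numpy as np

# Toy "compound" data standing in for the online database Atom2Vec mined;
# each compound is listed as the set of elements it contains.
compounds = [
    {"Na", "Cl"}, {"K", "Cl"}, {"Na", "Br"}, {"K", "Br"},
    {"Mg", "O"}, {"Ca", "O"}, {"Mg", "S"}, {"Ca", "S"},
]

elements = sorted({e for c in compounds for e in c})
index = {e: i for i, e in enumerate(elements)}

# Count how often each element appears in the same compound as another --
# the "context" analogue of neighbouring words in word2vec-style models.
cooc = np.zeros((len(elements), len(elements)))
for comp in compounds:
    for a in comp:
        for b in comp:
            if a != b:
                cooc[index[a], index[b]] += 1

# Factor the co-occurrence matrix; rows of U give low-dimensional element vectors.
U, s, _ = np.linalg.svd(cooc)
vectors = U[:, :2] * s[:2]

for e in elements:
    print(e, np.round(vectors[index[e]], 2))
# Elements with similar chemistry (Na/K, Mg/Ca, Cl/Br, O/S) end up with similar vectors.
```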
“We wanted to know whether an AI can be smart enough to discover the periodic table on its own, and our team showed that it can,” said study leader Shou-Cheng Zhang, the J. G. Jackson and C. J. Wood Professor of Physics at Stanford’s School of Humanities and Sciences.
The bots learn from self-play, meaning two bots play against each other and learn from each side’s successes and failures. By using a huge stack of 256 graphics processing units (GPUs) with 128,000 processing cores, the researchers were able to speed up the AI’s gameplay so that the bots learned from the equivalent of 180 years of gameplay for every day of training. One version of the bots was trained for four weeks, meaning it played more than 5,000 years of the game.
[…]
In a match, the OpenAI team initially gives each bot a mandate to do as well as it can on its own, meaning that the bots learned to act selfishly and steal kills from each other. But by turning up a simple metric, a weighted average of the team’s success, the bots soon begin to work together and execute team attacks quicker than humanly possible. The metric was dubbed by OpenAI as “team spirit.”
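OpenAI has described “team spirit” as a weighting between each bot’s own reward and the team’s average reward. A minimal sketch of that blending (the function name and the example values below are illustrative assumptions, not OpenAI’s published schedule):

```python
from typing import List

def blend_rewards(individual_rewards: List[float], team_spirit: float) -> List[float]:
    """Mix each bot's own reward with the team average.

    team_spirit = 0.0 -> purely selfish bots; team_spirit = 1.0 -> bots only
    care about the team's shared outcome.
    """
    team_average = sum(individual_rewards) / len(individual_rewards)
    return [
        (1.0 - team_spirit) * r + team_spirit * team_average
        for r in individual_rewards
    ]

# Early in training the bots act selfishly...
print(blend_rewards([1.0, 0.0, 0.0, 0.0, 0.0], team_spirit=0.0))
# ...and as team spirit is turned up, a kill by one bot rewards the whole squad.
print(blend_rewards([1.0, 0.0, 0.0, 0.0, 0.0], team_spirit=0.8))
```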
“They start caring more about team fighting, and saving one another, and working together in these skirmishes in order to make larger advances towards the group goal,” says Brooke Chan, an engineer at OpenAI.
Right now, the bots are restricted to playing certain characters and can’t use certain items, such as wards that let players see more of the map or anything that grants invisibility, nor can they summon other units with spells to help them fight. OpenAI hopes to lift those restrictions by the competition in August.
Among our exciting announcements at //Build, one of the things I was thrilled to launch is the AI Lab – a collection of AI projects designed to help developers explore, experience, learn about and code with the latest Microsoft AI Platform technologies.
What is AI Lab?
AI Lab helps our large, fast-growing community of developers get started on AI. It currently houses five projects that showcase the latest in custom vision, attnGAN (more below), Visual Studio tools for AI, Cognitive Search, machine reading comprehension and more. Each lab gives you access to the experimentation playground, source code on GitHub, a crisp developer-friendly video, and insights into the underlying business problem and solution. One of the projects we highlighted at //Build was the search and rescue challenge, which gave developers worldwide the opportunity to use AI School resources to build and deploy their first AI model for a problem involving aerial drones.
A group of scientists have built a neural network to sniff out any unusual nuclear activity. Researchers from the Pacific Northwest National Laboratory (PNNL), one of the United States Department of Energy national laboratories, decided to see if they could use deep learning to sort through the different nuclear decay events to identify any suspicious behavior.
The lab, buried beneath 81 feet of concrete, rock and earth, is blocked out from energy from cosmic rays, electronics and other sources. It means that the data collected is less noisy, making it easier to pinpoint unusual activity.
The system looks for electrons emitted and scattered from decaying radioactive particles, and monitors the abundance of argon-37, a radioactive isotope of argon that is created synthetically through nuclear explosions.
Argon-37, which has a half-life of 35 days, is produced when calcium captures excess neutrons and decays by emitting an alpha particle. Emily Mace, a scientist at PNNL, said she looks at the energy, timing, duration and other features of the decay events to see whether they stem from nuclear testing.
“Some pulse shapes are difficult to interpret,” said Mace. “It can be challenging to differentiate between good and bad data.”
Deep learning makes that process easier. Computer scientists collected 32,000 pulses and annotated their properties, teaching the system to spot any odd features that might classify a signal as ‘good’ or ‘bad’.
“Signals can be well behaved or they can be poorly behaved,” said Jesse Ward. “For the network to learn about the good signals, it needs a decent amount of bad signals for comparison.” When the researchers tested their system with 50,000 pulses and asked human experts to differentiate signals, the neural network agreed with them 100 per cent of the time.
It also correctly identified 99.9 per cent of the pulses compared to 96.1 per cent from more conventional techniques.
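The paper’s exact architecture isn’t described here, so the following is only a rough sketch of the kind of supervised “good vs bad pulse” classifier the passage implies, trained on made-up pulse features (energy, rise time, duration) rather than the lab’s real annotated data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for annotated pulses: columns are energy, rise time, duration.
n = 32_000
good = rng.normal([1.0, 0.2, 1.0], 0.1, size=(n // 2, 3))
bad = rng.normal([1.4, 0.6, 2.0], 0.4, size=(n // 2, 3))
X = np.vstack([good, bad])
y = np.array([1] * (n // 2) + [0] * (n // 2))  # 1 = "good" pulse, 0 = "bad"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print(f"agreement with held-out labels: {clf.score(X_test, y_test):.3f}")
```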
The AI, called Project Debater, appeared on stage in a packed conference room at IBM’s San Francisco office embodied in a 6ft tall black panel with a blue, animated “mouth”. It was a looming presence alongside the human debaters Noa Ovadia and Dan Zafrir, who stood behind a podium nearby.
Although the machine stumbled at many points, the unprecedented event offered a glimpse into how computers are learning to grapple with the messy, unstructured world of human decision-making.
For each of the two short debates, participants had to prepare a four-minute opening statement, followed by a four-minute rebuttal and a two-minute summary. The opening debate topic was “we should subsidize space exploration”, followed by “we should increase the use of telemedicine”.
In both debates, the audience voted Project Debater to be worse at delivery but better in terms of the amount of information it conveyed. And in spite of several robotic slip-ups, the audience voted the AI to be more persuasive (in terms of changing the audience’s position) than its human opponent, Zafrir, in the second debate.
It’s worth noting, however, that there were many members of IBM staff in the room and they may have been rooting for their creation.
IBM hopes the research will eventually enable a more sophisticated virtual assistant that can absorb massive and diverse sets of information to help build persuasive arguments and make well-informed decisions – as opposed to merely responding to simple questions and commands.
Project Debater was a showcase of IBM’s ability to process very large data sets, including millions of news articles across dozens of subjects, and then turn snippets of arguments into full flowing prose – a challenging task for a computer.
[…]
Once an AI is capable of persuasive arguments, it can be applied as a tool to aid human decision-making.
“We believe there’s massive potential for good in artificial intelligence that can understand us humans,” said Arvind Krishna, director of IBM Research.
One example of this might be corporate boardroom decisions, where there are lots of conflicting points of view. The AI system could, without emotion, listen to the conversation, take all of the evidence and arguments into account and challenge the reasoning of humans where necessary.
“This can increase the level of evidence-based decision-making,” said Reed, adding that the same system could be used for intelligence analysis in counter-terrorism, for example identifying if a particular individual represents a threat.
In both cases, the machine wouldn’t make the decision but would contribute to the discussion and act as another voice at the table.
Essentially, Project Debater assigns a confidence score to every piece of information it understands. As in: how confident is the system that it actually understands the content of what’s being discussed? “If it’s confident that it got that point right, if it really believes it understands what that opponent was saying, it’s going to try to make a very strong argument against that point specifically,” Welser explains.
“If it’s less confident,” he says, “it’ll do its best to make an argument that’ll be convincing as an argument even if it doesn’t exactly answer that point. Which is exactly what a human does too, sometimes.”
So: the human says that government should have specific criteria surrounding basic human needs to justify subsidization. Project Debater responds that space is awesome and good for the economy. A human might choose that tactic as a sneaky way to avoid debating on the wrong terms. Project Debater had different motivations in its algorithms, but not that different.
The point of this experiment wasn’t to make me think that I couldn’t trust that a computer is arguing in good faith — though it very much did that. No, the point is that IBM is showing off that it can train AI in new areas of research that could eventually be useful in real, practical contexts.
The first is parsing a lot of information in a decision-making context. The same technology that can read a corpus of data and come up with a bunch of pros and cons for a debate could be (and has been) used to decide whether or not a stock might be worth investing in. IBM’s system didn’t make the value judgement, but it did provide a bunch of information to the bank showing both sides of a debate about the stock.
As for the debating part, Welser says that it “helps us understand how language is used,” by teaching a system to work in a rhetorical context that’s more nuanced than the usual Hey Google give me this piece of information and turn off my lights. Perhaps it might someday help a lawyer structure their arguments, “not that Project Debater would make a very good lawyer,” he joked. Another IBM researcher suggested that this technology could help judge fake news.
How close is this to being something IBM turns into a product? “This is still a research level project,” Welser says, though “the technologies underneath it right now” are already beginning to be used in IBM projects.
The system listened to four minutes of its human opponent’s opening remarks, then parsed that data and created an argument that highlighted and attempted to debunk information shared by the opposing side. That’s incredibly impressive because it has to understand not only the words but the context of those words. Parroting back Wikipedia entries is easy; taking data and creating a narrative that’s based not only on raw data but also on what it’s just heard? That’s tough.
In a world where emotion and bias color all our decisions, Project Debater could help companies and governments see through the noise of our life experiences and produce mostly impartial conclusions. Of course, the data set it pulls from is based on what humans have written, and those writings will carry their own biases and emotions.
While the goal is an unbiased machine, during the discourse Project Debater wasn’t completely sterile. Amid its rebuttal against debater Dan Zafrir, while they argued about telemedicine expansion, the system stated that Zafrir had not told the truth during his opening statement about the increase in the use of telemedicine. In other words, it called him a liar.
When asked about the statement, Slonim said that the system has a confidence threshold during rebuttals. If it’s feeling very confident it creates a more complex statement. If it’s feeling less confident, the statement is less impressive.
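Based on Slonim’s description, the rebuttal logic amounts to a confidence gate: attack the specific point when the system believes it understood it, otherwise fall back to a generally persuasive statement. A toy sketch of that behaviour (the threshold and strategy strings are invented for illustration, not IBM’s implementation):

```python
from typing import Optional

def choose_rebuttal(understood_claim: Optional[str], confidence: float,
                    threshold: float = 0.7) -> str:
    """Pick a rebuttal strategy the way the passage describes Project Debater:
    directly rebut the opponent's point when confident, otherwise make a
    broadly convincing argument instead."""
    if understood_claim is not None and confidence >= threshold:
        return f"Directly rebut the opponent's claim: '{understood_claim}'"
    return "Make a broadly persuasive argument for our side instead"

print(choose_rebuttal("telemedicine use has not increased", confidence=0.9))
print(choose_rebuttal(None, confidence=0.3))
```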
An artificially intelligent system has been demonstrated generating URLs for phishing websites that appear to evade detection by security tools.
Essentially, the software can come up with URLs for webpages that masquerade as legit login pages for real websites, when in actual fact, the webpages simply collect the entered username and passwords to later hijack accounts.
Blacklists and algorithms – intelligent or otherwise – can be used to automatically identify and block links to phishing pages. Humans should be able to spot that the web links are dodgy, but not everyone is so savvy.
Using the Phishtank database, a group of computer scientists from Cyxtera Technologies, a cybersecurity biz based in Florida, USA, have built DeepPhish, which is machine-learning software that, allegedly, generates phishing URLs that beat these defense mechanisms.
[…]
The team inspected more than a million URLs on Phishtank to identify three different phishing miscreants who had generated webpages to steal people’s credentials. The team fed these web addresses into an AI-based phishing detection algorithm to measure how effective the URLs were at bypassing the system.
The first scumbag of the trio had used 1,007 attack URLs across 106 domains, and only seven of them were effective at avoiding setting off alarms, a success rate of just 0.69 per cent. The second had 102 malicious web addresses across 19 domains; only five of them bypassed the threat detection algorithm, an effectiveness of 4.91 per cent.
Next, they fed this information into a Long Short-Term Memory network (LSTM) to learn the general structure and extract features from the malicious URLs – for example, the second threat actor commonly used “tdcanadatrustindex.html” in its addresses.
All the text from the effective URLs was taken to create sentences, encoded into a vector and fed into the LSTM, which was trained to predict the next character given the previous one.
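DeepPhish’s exact network isn’t reproduced in the article, but a character-level LSTM of the kind described – trained to predict the next character of an attacker’s URLs and then sampled to emit new pseudo-URLs – might be sketched in PyTorch like this (the toy corpus, layer sizes and sampling loop are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Toy corpus standing in for one threat actor's effective URLs.
urls = ["http://tdcanadatrustindex.html.example.com/login",
        "http://secure-tdcanadatrust.example.net/index.html"]
text = "\n".join(urls)
vocab = sorted(set(text))
stoi = {c: i for i, c in enumerate(vocab)}

class CharLSTM(nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharLSTM(len(vocab))
optim = torch.optim.Adam(model.parameters(), lr=1e-2)
ids = torch.tensor([[stoi[c] for c in text]])

# Train to predict the next character given the previous ones.
for _ in range(200):
    logits, _ = model(ids[:, :-1])
    loss = nn.functional.cross_entropy(logits.reshape(-1, len(vocab)),
                                       ids[:, 1:].reshape(-1))
    optim.zero_grad(); loss.backward(); optim.step()

# Sample a pseudo-URL character by character.
out, state, cur = [], None, ids[:, :1]
for _ in range(60):
    logits, state = model(cur, state)
    cur = torch.multinomial(logits[:, -1].softmax(-1), 1)
    out.append(vocab[cur.item()])
print("".join(out))
```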
Over time it learns to generate a stream of text to simulate a list of pseudo URLs that are similar to the ones used as input. When DeepPhish was trained on data from the first threat actor, it also managed to create 1,007 URLs, and 210 of them were effective at evading detection, bumping up the score from 0.69 per cent to 20.90 per cent.
When it was following the structure from the second threat actor, it also produced 102 fake URLs, and 37 of them were successful, increasing the likelihood of tricking the existing defense mechanism from 4.91 per cent to 36.28 per cent.
The effectiveness rate isn’t very high because a lot of what comes out of the LSTM is effectively gibberish, containing strings of forbidden characters.
“It is important to automate the process of retraining the AI phishing detection system by incorporating the new synthetic URLs that each threat actor may create,” the researchers warned.
Following an open selection process, the Commission has appointed 52 experts to a new High-Level Expert Group on Artificial Intelligence, comprising representatives from academia, civil society, as well as industry.
The High-Level Expert Group on Artificial Intelligence (AI HLG) will have as a general objective to support the implementation of the European strategy on AI. This will include the elaboration of recommendations on future AI-related policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges.
Moreover, the AI HLG will serve as the steering group for the European AI Alliance’s work, interact with other initiatives, help stimulate a multi-stakeholder dialogue, gather participants’ views and reflect them in its analysis and reports.
In particular, the group will be tasked to:
Advise the Commission on next steps addressing AI-related mid to long-term challenges and opportunities through recommendations which will feed into the policy development process, the legislative evaluation process and the development of a next-generation digital strategy.
Propose to the Commission draft AI ethics guidelines, covering issues such as fairness, safety, transparency, the future of work, democracy and more broadly the impact on the application of the Charter of Fundamental Rights, including privacy and personal data protection, dignity, consumer protection and non-discrimination.
Support the Commission on further engagement and outreach mechanisms to interact with a broader set of stakeholders in the context of the AI Alliance, share information and gather their input on the group’s and the Commission’s work.
Researchers from NVIDIA developed a deep learning-based system that can produce high-quality slow-motion videos from a 30-frame-per-second video, outperforming various state-of-the-art methods that aim to do the same. The researchers will present their work at the annual Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, Utah this week.
“There are many memorable moments in your life that you might want to record with a camera in slow-motion because they are hard to see clearly with your eyes: the first time a baby walks, a difficult skateboard trick, a dog catching a ball,” the researchers wrote in the research paper. “While it is possible to take 240-frame-per-second videos with a cell phone, recording everything at high frame rates is impractical, as it requires large memories and is power-intensive for mobile devices,” the team explained.
With this new research, users can slow down their recordings after taking them.
Using NVIDIA Tesla V100 GPUs and the cuDNN-accelerated PyTorch deep learning framework, the team trained their system on more than 11,000 videos of everyday and sports activities shot at 240 frames per second. Once trained, the convolutional neural network predicted the extra frames.
The team used a separate dataset to validate the accuracy of their system.
The result can make videos shot at a lower frame rate look more fluid and less blurry.
“Our method can generate multiple intermediate frames that are spatially and temporally coherent,” the researchers said. “Our multi-frame approach consistently outperforms state-of-the-art single frame methods.”
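The paper’s full model (optical-flow estimation plus refinement) is more involved, but the training setup the article describes – take 240fps footage, keep two widely spaced frames as the “30fps input” and learn to predict the frames in between – can be sketched roughly as follows (the tiny network, fake clip and loss are placeholders, not NVIDIA’s architecture):

```python
import torch
import torch.nn as nn

class TinyInterpolator(nn.Module):
    """Placeholder CNN: given two frames and a time t in (0, 1), predict the
    intermediate frame. Stands in for the real flow-based model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, frame0, frame1, t):
        t_map = torch.full_like(frame0[:, :1], t)
        return self.net(torch.cat([frame0, frame1, t_map], dim=1))

model = TinyInterpolator()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake 240fps clip: 9 frames of 64x64 RGB. Frames 0 and 8 play the role of
# consecutive frames in a 30fps video; frames 1..7 are the targets to predict.
clip = torch.rand(9, 3, 64, 64)
frame0, frame1 = clip[0:1], clip[8:9]

for step in range(100):
    i = torch.randint(1, 8, (1,)).item()
    t = i / 8.0
    pred = model(frame0, frame1, t)
    loss = nn.functional.l1_loss(pred, clip[i:i+1])
    optim.zero_grad(); loss.backward(); optim.step()
```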
To help demonstrate the research, the team took a series of clips from The Slow Mo Guys, a popular slow-motion based science and technology entertainment YouTube series created by Gavin Free, starring himself and his friend Daniel Gruchy, and made their videos even slower.
The method can take everyday videos of life’s most precious moments and slow them down to look like your favorite cinematic slow-motion scenes, adding suspense, emphasis, and anticipation.
A new piece of software has been trained to use wifi signals — which pass through walls, but bounce off living tissue — to monitor the movements, breathing, and heartbeats of humans on the other side of those walls. The researchers say this new tech’s promise lies in areas like remote healthcare, particularly elder care, but it’s hard to ignore slightly more dystopian applications.
[…]
“We actually are tracking 14 different joints on the body … the head, the neck, the shoulders, the elbows, the wrists, the hips, the knees, and the feet,” Katabi said. “So you can get the full stick-figure that is dynamically moving with the individuals that are obstructed from you — and that’s something new that was not possible before.”
An animation created by the RF-Pose software as it translates a wifi signal into a visual of human motion behind a wall.
The technology works a little bit like radar, but to teach their neural network how to interpret these granular bits of human activity, the team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) had to create two separate A.I.s: a student and a teacher.
[…]
The team developed one A.I. program that monitored human movements with a camera on one side of a wall, and fed that information to their wifi X-ray A.I., called RF-Pose, as it struggled to make sense of the radio waves passing through that wall from the other side.
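That cross-modal supervision can be thought of as a teacher-student setup: a vision network produces pose keypoints from the camera view, and those predictions become training targets for the radio-frequency network looking at the same scene. A heavily simplified sketch (the shapes, the stand-in teacher and the student network are placeholders, not the CSAIL models):

```python
import torch
import torch.nn as nn

NUM_JOINTS = 14  # head, neck, shoulders, elbows, wrists, hips, knees, feet

class RFStudent(nn.Module):
    """Placeholder network that maps an RF heatmap to 14 (x, y) joint positions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, NUM_JOINTS * 2),
        )

    def forward(self, rf):
        return self.net(rf).view(-1, NUM_JOINTS, 2)

def vision_teacher(camera_frame: torch.Tensor) -> torch.Tensor:
    """Stand-in for a camera-based pose estimator; here it just returns fake
    keypoints so the sketch runs end to end."""
    return torch.rand(camera_frame.shape[0], NUM_JOINTS, 2)

student = RFStudent()
optim = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    camera = torch.rand(8, 3, 128, 128)   # synchronized camera frames
    rf = torch.rand(8, 1, 64, 64)          # corresponding RF measurements
    targets = vision_teacher(camera)       # teacher labels from the visible side
    loss = nn.functional.mse_loss(student(rf), targets)
    optim.zero_grad(); loss.backward(); optim.step()
```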
In these scenarios, a deep-learning machine is given the rules of the game and then plays against itself. Crucially, it is rewarded at each step according to how it performs. This reward process is hugely important because it helps the machine to distinguish good play from bad play. In other words, it helps the machine learn.
But this doesn’t work in many real-world situations, because rewards are often rare or hard to determine.
For example, random turns of a Rubik’s Cube cannot easily be rewarded, since it is hard to judge whether the new configuration is any closer to a solution. And a sequence of random turns can go on for a long time without reaching a solution, so the end-state reward can only be offered rarely.
In chess, by contrast, there is a relatively large search space but each move can be evaluated and rewarded accordingly. That just isn’t the case for the Rubik’s Cube.
Enter Stephen McAleer and colleagues from the University of California, Irvine. These guys have pioneered a new kind of deep-learning technique, called “autodidactic iteration,” that can teach itself to solve a Rubik’s Cube with no human assistance. The trick that McAleer and co have mastered is to find a way for the machine to create its own system of rewards.
Here’s how it works. Given an unsolved cube, the machine must decide whether a specific move is an improvement on the existing configuration. To do this, it must be able to evaluate the move.
Autodidactic iteration does this by starting with the finished cube and working backwards to find a configuration that is similar to the proposed move. This process is not perfect, but deep learning helps the system figure out which moves are generally better than others.
Having been trained, the network then uses a standard search tree to hunt for suggested moves for each configuration.
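The core trick of autodidactic iteration is to generate training positions by scrambling backwards from the solved state, then bootstrap each position’s cost estimate from its children. A drastically simplified value-learning sketch of that idea (a real implementation works on the full cube state and adds tree search; this stand-in uses a toy ring puzzle instead of a cube):

```python
import random
import torch
import torch.nn as nn

# Toy puzzle standing in for the cube: a token on a 20-position ring, moves are
# +1 / -1, and the "solved" state is position 0.
N = 20
MOVES = [+1, -1]

value_net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
optim = torch.optim.Adam(value_net.parameters(), lr=1e-2)

def value(state: int) -> float:
    with torch.no_grad():
        return value_net(torch.tensor([[state / N]])).item()

for step in range(2000):
    # Scramble backwards from the solved state to get a training position.
    state = 0
    for _ in range(random.randint(1, 10)):
        state = (state + random.choice(MOVES)) % N

    # Bootstrapped target: one move plus the best child's estimated cost,
    # with the solved state pinned to zero cost.
    children = [(state + m) % N for m in MOVES]
    target = 0.0 if state == 0 else 1.0 + min(
        0.0 if c == 0 else value(c) for c in children
    )

    pred = value_net(torch.tensor([[state / N]]))
    loss = (pred - torch.tensor([[target]])) ** 2
    optim.zero_grad(); loss.mean().backward(); optim.step()

# After training, states nearer the goal should get lower cost estimates.
print([round(value(s), 2) for s in range(6)])
```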
The result is an algorithm that performs remarkably well. “Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves—less than or equal to solvers that employ human domain knowledge,” say McAleer and co.
That’s interesting because it has implications for a variety of other tasks that deep learning has struggled with, including puzzles like Sokoban, games like Montezuma’s Revenge, and problems like prime number factorization.
Indeed, McAleer and co have other goals in their sights: “We are working on extending this method to find approximate solutions to other combinatorial optimization problems such as prediction of protein tertiary structure.”
Video Machine-learning experts have built a neural network that can manipulate facial movements in videos to create fake footage – in which people appear to say something they never actually said.
It could be used to create convincing yet faked announcements and confessions seemingly uttered by the rich and powerful as well as the average and mediocre, producing a new class of fake news and further separating us all from reality… if it works well enough, naturally.
It’s not quite like Deepfakes, which perversely superimposed the faces of famous actresses and models onto the bodies of raunchy X-rated movie stars.
Instead of mapping faces onto different bodies, this latest AI technology controls the target’s face, manipulating it into copying the head movements and facial expressions of a source. In one of the examples, Barack Obama acts as the source and Vladimir Putin as the target, so it looks as though a speech given by Obama was instead given by Putin.
Obama’s facial expressions are mapped onto Putin’s face using this latest AI technique … Image credit: Hyeongwoo Kim et al
A paper describing the technique, which popped up online at the end of last month, claims to produce realistic results. The method was developed by Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, and Christian Theobalt.
The Deepfakes Reddit forum, which has since been shut down, was flooded with people posting tragically bad computer-generated videos of celebs’ blurry and twitchy faces pasted onto porno babes using machine-learning software, with mismatched eyebrows and skittish movements. You could, after a few seconds, tell they were bogus, basically.
A previous similar project created a video of someone pretending to say something he or she hadn’t through lip-synching and an audio clip. Again, the researchers used Barack Obama as an example. But the results weren’t completely convincing since the lip movements didn’t always align properly.
That’s less of a problem with this new approach, however. It’s, supposedly, the first model that can transfer the full three-dimensional head position, head rotation, face expression, eye gaze and blinking from a source onto a portrait video of a target, according to the paper.
Controlling the target head
It uses a series of landmarks to reconstruct a face so it can track the head and facial movements to capture facial expressions for the input source video and output target video for every frame. A facial representation method computes the parameters of the face for both videos.
Next, these parameters are slightly modified and copied from the source to the target face for a realistic mapping. Synthetic images of the target’s face are rendered using an Nvidia GeForce GTX Titan X GPU.
The rendering part is where the generative adversarial network comes in. The training data comes from the tracked video frames of the target video sequence. The goal is to generate fake images good enough to pass for the real target video frames and so trick a discriminator network.
Only about two thousand frames – which amounts to a minute of footage – are enough to train the network. At the moment, only the facial expressions can be modified realistically. The method doesn’t copy the upper body and cannot deal with backgrounds that change too much.
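The rendering stage the article describes is a conditional GAN: a generator turns the (modified) face parameters, rendered as a conditioning image, into a photorealistic target frame, while a discriminator compares it against real frames of the target video. A stripped-down sketch of that training loop (the toy networks and random tensors are placeholders, not the authors’ architecture or data):

```python
import torch
import torch.nn as nn

# Generator: conditioning image (a rendering of the transferred face
# parameters) -> synthetic target frame.
gen = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
)
# Discriminator: frame -> real/fake score.
disc = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
)
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# ~2,000 tracked frames of the target video would be the real training data;
# random tensors stand in for conditioning renderings and real frames here.
for step in range(100):
    cond = torch.rand(4, 3, 64, 64)
    real = torch.rand(4, 3, 64, 64)

    # Discriminator step: real frames vs generated frames.
    fake = gen(cond).detach()
    d_loss = bce(disc(real), torch.ones(4, 1)) + bce(disc(fake), torch.zeros(4, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: fool the discriminator.
    g_loss = bce(disc(gen(cond)), torch.ones(4, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```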
Deep reinforcement learning methods traditionally struggle with tasks where environment rewards are particularly sparse. One successful method of guiding exploration in these domains is to imitate trajectories provided by a human demonstrator. However, these demonstrations are typically collected under artificial conditions, i.e. with access to the agent’s exact environment setup and the demonstrator’s action and reward trajectories. Here we propose a two-stage method that overcomes these limitations by relying on noisy, unaligned footage without access to such data. First, we learn to map unaligned videos from multiple sources to a common representation using self-supervised objectives constructed over both time and modality (i.e. vision and sound). Second, we embed a single YouTube video in this representation to construct a reward function that encourages an agent to imitate human gameplay. This method of one-shot imitation allows our agent to convincingly exceed human-level performance on the infamously hard exploration games Montezuma’s Revenge, Pitfall! and Private Eye for the first time, even if the agent is not presented with any environment rewards.
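The reward construction in the abstract can be pictured as checkpoint matching in embedding space: the demonstration video is embedded once, and the agent is rewarded whenever its own observation embedding comes close to the next checkpoint. A schematic sketch of that idea (the placeholder encoder, checkpoint spacing and distance threshold are assumptions, not the paper’s learned representation):

```python
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 84 * 84, 128))  # placeholder encoder

def make_checkpoints(demo_frames: torch.Tensor, every: int = 16) -> torch.Tensor:
    """Embed every N-th frame of the single YouTube demonstration."""
    with torch.no_grad():
        return embed(demo_frames[::every])

def imitation_reward(obs: torch.Tensor, checkpoints: torch.Tensor,
                     next_idx: int, threshold: float = 1.0):
    """Give +1 when the agent's embedded observation is close to the next
    demonstration checkpoint, then advance to the following checkpoint."""
    with torch.no_grad():
        z = embed(obs.unsqueeze(0))
    dist = torch.norm(z - checkpoints[next_idx])
    if dist < threshold and next_idx < len(checkpoints) - 1:
        return 1.0, next_idx + 1
    return 0.0, next_idx

# Usage with fake data: one demo video and one agent observation.
demo = torch.rand(256, 3, 84, 84)
checkpoints = make_checkpoints(demo)
reward, next_idx = imitation_reward(torch.rand(3, 84, 84), checkpoints, next_idx=0)
```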
For the first time, new research suggests artificial intelligence may be better than highly trained humans at detecting skin cancer. A study conducted by an international team of researchers pitted experienced dermatologists against a machine learning system, known as a deep learning convolutional neural network, or CNN, to see which was more effective at detecting malignant melanomas.
[…]
Fifty-eight dermatologists from 17 countries around the world participated in the study. More than half of the doctors were considered expert level with more than five years’ experience. Nineteen percent said they had between two to five years’ experience, and 29 percent had less than two years’ experience.
[…]
At first look, dermatologists correctly detected an average of 87 percent of melanomas, and accurately identified an average of 73 percent of lesions that were not malignant. By comparison, the CNN correctly detected 95 percent of melanomas.
Things improved a bit for the dermatologists when they were given additional information about the patients along with the photos; then they accurately diagnosed 89 percent of malignant melanomas and 76 percent of benign moles. Still, they were outperformed by the artificial intelligence system, which was working solely from the images.
“The CNN missed fewer melanomas, meaning it had a higher sensitivity than the dermatologists, and it misdiagnosed fewer benign moles as malignant melanoma, which means it had a higher specificity; this would result in less unnecessary surgery,” study author Professor Holger Haenssle, senior managing physician in the Department of Dermatology at the University of Heidelberg in Germany, said in a statement.
The expert dermatologists performed better in the initial round of diagnoses than the less-experienced doctors at identifying malignant melanomas. But their average of correct diagnoses was still worse than the AI system’s.
Human footsteps can provide a unique behavioural pattern for robust biometric systems. We propose spatio-temporal footstep representations from floor-only sensor data in advanced computational models for automatic biometric verification. Our models deliver an artificial intelligence capable of effectively differentiating the fine-grained variability of footsteps between legitimate users (clients) and impostor users of the biometric system. The methodology is validated on the largest footstep database to date, containing nearly 20,000 footstep signals from more than 120 users. The database is organized by considering a large cohort of impostors and a small set of clients to verify the reliability of biometric systems. We provide experimental results in 3 critical data-driven security scenarios, according to the amount of footstep data made available for model training: airport security checkpoints (smallest training set), workspace environments (medium training set) and home environments (largest training set). We report state-of-the-art footstep recognition rates with an optimal equal false acceptance and false rejection rate of 0.7% (equal error rate), an improvement ratio of 371% over the previous state of the art. We perform a feature analysis of deep residual neural networks, showing effective clustering of clients’ footstep data, and provide insights into the feature learning process.
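The headline figure, a 0.7% equal error rate, is the operating point where false acceptances and false rejections are equally likely. A small sketch of how an EER can be computed from verification scores (the scores below are synthetic, not the paper’s data):

```python
import numpy as np

def equal_error_rate(client_scores: np.ndarray, impostor_scores: np.ndarray) -> float:
    """Sweep a decision threshold and return the error rate at the point where
    false rejections (clients below threshold) and false acceptances
    (impostors at or above threshold) are closest to equal."""
    thresholds = np.sort(np.concatenate([client_scores, impostor_scores]))
    eer, best_gap = 1.0, np.inf
    for t in thresholds:
        frr = np.mean(client_scores < t)
        far = np.mean(impostor_scores >= t)
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2
    return eer

rng = np.random.default_rng(0)
clients = rng.normal(2.0, 1.0, 1_000)      # verification scores for genuine users
impostors = rng.normal(-2.0, 1.0, 20_000)  # scores for the large impostor cohort
print(f"EER: {equal_error_rate(clients, impostors):.3%}")
```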
In a field of sugar beet in Switzerland, a solar-powered robot that looks like a table on wheels scans the rows of crops with its camera, identifies weeds and zaps them with jets of blue liquid from its mechanical tentacles.
Undergoing final tests before the liquid is replaced with weedkiller, the Swiss robot is one of new breed of AI weeders that investors say could disrupt the $100 billion pesticides and seeds industry by reducing the need for universal herbicides and the genetically modified (GM) crops that tolerate them.
[…]
While still in its infancy, the plant-by-plant approach heralds a marked shift from standard methods of crop production.
Now, non-selective weedkillers such as Monsanto’s Roundup are sprayed on vast tracts of land planted with tolerant GM seeds, driving one of the most lucrative business models in the industry.
‘SEE AND SPRAY’
But ecoRobotix (www.ecorobotix.com/en), developer of the Swiss weeder, believes its design could reduce the amount of herbicide farmers use by 20 times. The company said it is close to signing a financing round with investors, and the robot is due to go on the market by early 2019.
Blue River, a Silicon Valley startup bought by U.S. tractor company Deere & Co. for $305 million last year, has also developed a machine using on-board cameras to distinguish weeds from crops and only squirt herbicides where necessary.
Its “See and Spray” weed control machine, which has been tested in U.S. cotton fields, is towed by a tractor and the developers estimate it could cut herbicide use by 90 percent once crops have started growing.
German engineering company Robert Bosch is also working on similar precision spraying kits, as are other startups such as Denmark’s Agrointelli.
ROBO Global (www.roboglobal.com/about-us), an advisory firm that runs a robotics and automation investment index tracked by funds worth a combined $4 billion, believes plant-by-plant precision spraying will only gain in importance.
“A lot of the technology is already available. It’s just a question of packaging it together at the right cost for the farmers,” said Richard Lightbound, Robo’s CEO for Europe, the Middle East and Africa.
“If you can reduce herbicides by the factor of 10 it becomes very compelling for the farmer in terms of productivity. It’s also eco friendly and that’s clearly going to be very popular, if not compulsory, at some stage,” he said.
Computer vision has advanced so significantly that many discriminative approaches such as object recognition are now widely used in real applications. We present another exciting development that utilizes generative models for the mass customization of medical products such as dental crowns. In the dental industry, it takes a technician years of training to design synthetic crowns that restore the function and integrity of missing teeth. Each crown must be customized to individual patients, and it requires human expertise in a time-consuming and labor-intensive process, even with computer-assisted design software. We develop a fully automatic approach that learns not only from human designs of dental crowns, but also from natural spatial profiles between opposing teeth. The latter is hard to account for by technicians but important for proper biting and chewing functions. Built upon a Generative Adversarial Network (GAN) architecture, our deep learning model predicts the customized crown-filled depth scan from the crown-missing depth scan and opposing depth scan. We propose to incorporate additional space constraints and statistical compatibility into learning. Our automatic designs exceed human technicians’ standards for good morphology and functionality, and our algorithm is being tested for production use.
ALPHA is currently viewed as a research tool for manned and unmanned teaming in a simulation environment. In its earliest iterations, ALPHA consistently outperformed a baseline computer program previously used by the Air Force Research Lab for research. In other words, it defeated other AI opponents.
In fact, it was only after early iterations of ALPHA bested other computer program opponents that Lee then took to manual controls against a more mature version of ALPHA last October. Not only was Lee not able to score a kill against ALPHA after repeated attempts, he was shot out of the air every time during protracted engagements in the simulator.
Since that first human vs. ALPHA encounter in the simulator, this AI has repeatedly bested other experts as well, and is even able to win out against these human experts when its (the ALPHA-controlled) aircraft are deliberately handicapped in terms of speed, turning, missile capability and sensors.
Lee, who has been flying in simulators against AI opponents since the early 1980s, said of that first encounter against ALPHA, “I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed.”
He added that with most AIs, “an experienced pilot can beat up on it (the AI) if you know what you’re doing. Sure, you might have gotten shot down once in a while by an AI program when you, as a pilot, were trying something new, but, until now, an AI opponent simply could not keep up with anything like the real pressure and pace of combat-like scenarios.”
[…]
Eventually, ALPHA aims to lessen the likelihood of mistakes, since its operations already occur significantly faster than those of other language-based consumer-product programs. In fact, ALPHA can take in the entirety of sensor data, organize it, create a complete mapping of a combat scenario and make or change combat decisions for a flight of four fighter aircraft in less than a millisecond. Basically, the AI is so fast that it could consider and coordinate the best tactical plan and precise responses, within a dynamic environment, over 250 times faster than ALPHA’s human opponents could blink.
[…]
It would normally be expected that an artificial intelligence with the learning and performance capabilities of ALPHA, applicable to incredibly complex problems, would require a super computer in order to operate.
However, ALPHA and its algorithms require no more than the computing power available in a low-budget PC in order to run in real time and quickly react and respond to uncertainty and random events or scenarios.
[…]
To reach its current performance level, ALPHA’s training has occurred on a $500 consumer-grade PC. This training process started with numerous and random versions of ALPHA. These automatically generated versions of ALPHA proved themselves against a manually tuned version of ALPHA. The successful strings of code were then “bred” with each other, favoring the stronger, or highest-performing, versions. In other words, only the best-performing code is used in subsequent generations. Eventually, one version of ALPHA rises to the top in terms of performance, and that’s the one that is utilized.
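The training described – spawn random candidates, score them against a reference opponent, keep and “breed” the best – is the selection loop of a genetic algorithm. The toy sketch below only illustrates that loop; the genome length, mutation rate and fitness function are made up and are not Psibernetix’s genetic fuzzy tree:

```python
import random

GENOME_LEN = 16  # stands in for the tunable parameters of one ALPHA candidate

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    """Placeholder for 'fly simulated engagements against the manually tuned
    baseline and count wins'; here, just prefer genomes near an arbitrary target."""
    return -sum((g - 0.5) ** 2 for g in genome)

def breed(parent_a, parent_b):
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    if random.random() < 0.2:                       # occasional mutation
        child[random.randrange(GENOME_LEN)] += random.gauss(0, 0.1)
    return child

population = [random_genome() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                     # favour the strongest versions
    population = survivors + [
        breed(random.choice(survivors), random.choice(survivors)) for _ in range(40)
    ]

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 4))
```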
[…]
ALPHA is developed by Psibernetix Inc., serving as a contractor to the United States Air Force Research Laboratory.
Support for Ernest’s doctoral research, $200,000 in total, was provided over three years by the Dayton Area Graduate Studies Institute and the U.S. Air Force Research Laboratory.
Joint Concept Note (JCN) 1/18, Human-Machine Teaming articulates the challenges and opportunities that robotic and artificial intelligence (AI) technologies offer, and identifies how we achieve military advantage through human-machine teams. Its purpose is to guide coherent future force development and help frame defence strategy and policy.
The JCN examines:
economic and technological trends and the likely impacts of AI and robotic systems on defence
potential evolutionary paths that robotic and AI systems may take in conflict
the effects of AI and robotics development on conflict across the observe, orient, decide and act (OODA) loop
why optimised human-machine teams will be essential to developing military advantage
JCN 1/18 should be read by everyone who needs to understand how AI, robotics and data can change the future character of conflict, for us and our adversaries.
As part of the MOD’s commitment to pursue and deliver future capabilities, the Defence Secretary announced the launch of AI Lab – a single flagship for Artificial Intelligence, machine learning and data science in defence based at Dstl in Porton Down. AI Lab will enhance and accelerate the UK’s world-class capability in the application of AI-related technologies to defence and security challenges. Dstl currently delivers more than £20 million of research related to AI and this is forecast to grow significantly.
AI Lab will engage in high-level research on areas from autonomous vehicles to intelligent systems; from countering fake news to using information to deter and de-escalate conflicts; and from enhanced computer network defences to improved decision aids for commanders. AI Lab provides tremendous opportunities to help keep the British public safe from a range of defence and security threats. This new creation will help Dstl contribute more fully to this vital challenge.
A team of computer scientists has built the smallest completely autonomous nano-drone, one that can control itself without the need for human guidance.
Although computer vision has improved rapidly thanks to machine learning and AI, it remains difficult to deploy algorithms on devices like drones due to memory, bandwidth and power constraints.
But researchers from ETH Zurich in Switzerland and the University of Bologna in Italy have managed to build a hand-sized drone that can fly autonomously and consumes only about 94 milliwatts (0.094 W) of energy. Their efforts were published in a paper on arXiv earlier this month.
At the heart of it all is DroNet, a convolutional neural network that processes incoming images from a camera at 20 frames per second. It works out the steering angle, so that it can control the direction of the drone, and the probability of a collision, so that it knows whether to keep going or stop. Training was conducted using thousands of images taken from bicycles and cars driving along different roads and streets.
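The two outputs described – a regressed steering angle and a collision probability – suggest a small CNN with two heads. A rough PyTorch sketch of that structure (layer sizes, input resolution and the stop threshold are illustrative assumptions, not the actual DroNet weights):

```python
import torch
import torch.nn as nn

class TwoHeadDroneNet(nn.Module):
    """Camera frame in, (steering angle, collision probability) out."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.steering_head = nn.Linear(32, 1)   # regression: steering angle
        self.collision_head = nn.Linear(32, 1)  # classification: collision risk

    def forward(self, frame):
        features = self.backbone(frame)
        steering = self.steering_head(features)
        collision_prob = torch.sigmoid(self.collision_head(features))
        return steering, collision_prob

model = TwoHeadDroneNet()
frame = torch.rand(1, 1, 200, 200)          # one grayscale camera frame
steering, collision_prob = model(frame)
if collision_prob.item() > 0.7:             # assumed threshold: stop if risky
    print("stop")
else:
    print(f"steer by {steering.item():.3f} rad and keep going")
```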
[…]
But it suffers from some of the same setbacks as the older model. Since it was trained with images from a single plane, the drone can only move horizontally and cannot fly up or down.
Now that DeepMind has solved Go, the company is applying its deep-learning techniques to navigation. Navigation relies on knowing where you are in space relative to your surroundings and continually updating that knowledge as you move. DeepMind scientists trained neural networks to navigate like this in a square arena, mimicking the paths that foraging rats took as they explored the space. The networks got information about the rat’s speed, head direction, distance from the walls, and other details. To the researchers’ surprise, the networks that learned to successfully navigate this space had developed a layer akin to grid cells. This was surprising because it is the exact same system that mammalian brains use to navigate.
A few different cell populations in our brains help us make our way through space. Place cells are so named because they fire when we pass through a particular place in our environment relative to familiar external objects. They are located in the hippocampus—a brain region responsible for memory formation and storage—and are thus thought to provide a cellular place for our memories. Grid cells got their name because they superimpose a hypothetical hexagonal grid upon our surroundings, as if the whole world were overlaid with vintage tiles from the floor of a New York City bathroom. They fire whenever we pass through a node on that grid.
More DeepMind experiments showed that only the neural networks that developed layers that “resembled grid cells, exhibiting significant hexagonal periodicity (gridness),” could navigate more complicated environments than the initial square arena, like setups with multiple rooms. And only these networks could adjust their routes based on changes in the environment, recognizing and using shortcuts to get to preassigned goals after previously closed doors were opened to them.
Implications
These results have a couple of interesting ramifications. One is the suggestion that grid cells are the optimal way to navigate. They didn’t have to emerge here—there was nothing dictating their formation—yet this computer system hit upon them as the best solution, just like our biological system did. Since the evolution of any system, cell type, or protein can proceed along multiple parallel paths, it is very much not a given that the system we end up with is in any way inevitable or optimized. This report seems to imply that, with grid cells, that might actually be the case.
Another implication is the support for the idea that grid cells function to impose a Euclidian framework upon our surroundings, allowing us to find and follow the most direct route to a (remembered) destination. This function had been posited since the discovery of grid cells in 2005, but it had not yet been proven empirically. DeepMind’s findings provide a biological bolster for the idea floated by Kant in the 18th century that our perception of place is an innate ability, independent of experience.
Ultimately, AI systems are only useful and safe as long as the goals they’ve learned actually mesh with what humans want them to do, and it can often be hard to know if they’ve subtly learned to solve the wrong problems or make bad decisions in certain conditions.
To make AI easier for humans to understand and trust, researchers at the nonprofit research organization OpenAI have proposed training algorithms to not only classify data or make decisions, but to justify their decisions in debates with other AI programs in front of a human or AI judge.
“Given a question or proposed action, two agents take turns making short statements up to a limit, then a human judges which of the agents gave the most true, useful information,” write OpenAI researchers Geoffrey Irving, Paul Christiano and Dario Amodei in a new research paper. The San Francisco-based AI lab is funded by Silicon Valley luminaries including Y Combinator President Sam Altman and Tesla CEO Elon Musk, with a goal of building safe, useful AI to benefit humanity.
Since human time is valuable and usually limited, the researchers say the AI systems can effectively train themselves in part by debating in front of an AI judge designed to mimic human decision making, similar to how software that plays games like Go or chess often trains in part by playing against itself.
In an experiment described in their paper, the researchers set up a debate where two software agents work with a standard set of handwritten numerals, attempting to convince an automated judge that a particular image is one digit rather than another digit, by taking turns revealing one pixel of the digit at a time. One bot is programmed to tell the truth, while another is programmed to lie about what number is in the image, and they reveal pixels to support their contentions that the digit is, say, a five rather than a six.
[Image: Microsoft’s computer vision API incorrectly determined this image contains sheep. Courtesy Janelle Shane / aiweirdness.com]
The truth-telling bots tend to reveal pixels from distinctive parts of the digit, like the horizontal line at the top of the numeral “5,” while the lying bots, in an attempt to deceive the judge, point out what amount to the most ambiguous areas, like the curve at the bottom of both a “5” and a “6.” The judge ultimately “guesses” which bot is telling the truth based on the pixels that have been revealed.
The image classification task, where most of the image is invisible to the judge, is a sort of stand-in for complex problems where it wouldn’t be possible for a human judge to analyze the entire dataset to judge bot performance. The judge would have to rely on the facets of the data highlighted by debating robots, the researchers say.
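A bare-bones version of this pixel-revealing debate game can be written directly from the description: two agents alternately reveal one pixel each from a 28x28 digit, and a judge that only sees the revealed pixels picks a label. The agent heuristics and judge below are trivial stand-ins for OpenAI’s trained models, and the random array stands in for a real MNIST digit:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((28, 28))          # stand-in for an MNIST digit
mask = np.zeros_like(image, dtype=bool)

def honest_agent(img, mask):
    """Reveal the brightest still-hidden pixel (a crude 'distinctive stroke' heuristic)."""
    hidden = np.where(~mask, img, -np.inf)
    return np.unravel_index(np.argmax(hidden), img.shape)

def lying_agent(img, mask):
    """Reveal an ambiguous (mid-intensity) hidden pixel to muddy the picture."""
    hidden = np.where(~mask, np.abs(img - 0.5), np.inf)
    return np.unravel_index(np.argmin(hidden), img.shape)

# Six turns of debate: agents alternate revealing one pixel at a time.
for turn in range(6):
    agent = honest_agent if turn % 2 == 0 else lying_agent
    r, c = agent(image, mask)
    mask[r, c] = True

# The "judge" sees only the revealed pixels and guesses from them.
revealed = np.where(mask, image, 0.0)
judgement = "5" if revealed.sum() > 3.0 else "6"   # placeholder decision rule
print(f"judge saw {mask.sum()} pixels and guessed: {judgement}")
```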
“The goal here is to model situations where we have something that’s beyond human scale,” says Irving, a member of the AI safety team at OpenAI. “The best we can do there is replace something a human couldn’t possibly do with something a human can’t do because they’re not seeing an image.”
[…]
To test their hypothesis—that two debaters can lead to honest behavior even if the debaters know much more than the judge—the researchers have also devised an interactive demonstration of their approach, played entirely by humans and now available online. In the game, two human players are shown an image of either a dog or a cat and argue before a judge as to which species is represented. The contestants are allowed to highlight rectangular sections of the image to make their arguments—pointing out, for instance, a dog’s ears or cat’s paws—but the judge can “see” only the shapes and positions of the rectangles, not the actual image. While the honest player is required to tell the truth about what animal is shown, he or she is allowed to tell other lies in the course of the debate. “It is an interesting question whether lies by the honest player are useful,” the researchers write.
[…]
The researchers emphasize that it’s still early days, and the debate-based method still requires plenty of testing before AI developers will know exactly when it’s an effective strategy or how best to implement it. For instance, they may find it better to use single judges or a panel of voting judges, or that some people are better equipped to judge certain debates.
It also remains to be seen whether humans will be accurate judges of sophisticated robots working on more sophisticated problems. People might be biased to rule in a certain way based on their own beliefs, and there could be problems that are hard to reduce enough to have a simple debate about, like the soundness of a mathematical proof, the researchers write.
Other less subtle errors may be easier to spot, like the sheep that Shane noticed had been erroneously labeled by Microsoft’s algorithms. “The agent would claim there’s sheep and point to the nonexistent sheep, and the human would say no,” Irving writes in an email to Fast Company.
But deceitful bots might also learn to appeal to human judges in sophisticated ways that don’t involve offering rigorous arguments, Shane suggested. “I wonder if we’d get kind of demagogue algorithms that would learn to exploit human emotions to argue their point,” she says.
Researchers at Endgame, a cyber-security biz based in Virginia, have published what they believe is the first large open-source dataset for machine-learning malware detection, known as EMBER.
EMBER contains metadata describing 1.1 million Windows portable executable files: 900,000 training samples evenly split into malicious, benign, and unlabeled categories, and 200,000 test samples labelled as malicious or benign.
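A typical workflow with a dataset like this is to load vectorized PE features and fit a gradient-boosted classifier. The sketch below assumes the features have already been exported to NumPy arrays on disk; the file names and the choice of LightGBM are assumptions for illustration, not a prescribed EMBER pipeline:

```python
import numpy as np
import lightgbm as lgb
from sklearn.metrics import roc_auc_score

# Assumed local exports of the EMBER feature matrices; the exact file names
# here are illustrative.
X_train = np.load("ember_X_train.npy")
y_train = np.load("ember_y_train.npy")   # 1 = malicious, 0 = benign, -1 = unlabeled
X_test = np.load("ember_X_test.npy")
y_test = np.load("ember_y_test.npy")

# Drop the unlabeled portion of the training split before supervised training.
labeled = y_train != -1
model = lgb.LGBMClassifier(n_estimators=400, num_leaves=64)
model.fit(X_train[labeled], y_train[labeled])

scores = model.predict_proba(X_test)[:, 1]
print(f"test ROC AUC: {roc_auc_score(y_test, scores):.4f}")
```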
“We’re trying to push the dark arts of infosec research into an open light. EMBER will make AI research more transparent and reproducible,” Hyrum Anderson, co-author of the study to be presented at the RSA conference this week in San Francisco, told The Register.
Progress in AI is driven by data. Researchers compete with one another by building models and training them on benchmark datasets to reach ever increasing accuracies.
Computer vision is flooded with numerous datasets containing millions of annotated pictures for image recognition tasks, and natural language processing has various text-based datasets to test machine reading and comprehension skills.
Although there is a strong interest in using AI for information security – look at DARPA’s Cyber Grand Challenge where academics developed software capable of hunting for security bugs autonomously – it’s an area that doesn’t really have any public datasets.
Researchers at software infrastructure firm Pivotal have taught AI to locate this accidentally public sensitive information in a surprising way: by looking at the code as if it were a picture. Since modern artificial intelligence is arguably better than humans at identifying minute differences in images, telling the difference between a password and normal code is, for a computer, just like telling a dog from a cat.
The best way to check whether private passwords or sensitive information has been left public today is to use hand-coded rules called “regular expressions.” These rules tell a computer to find any string of characters that meets specific criteria, like length and included characters. But passwords are all different, and this method means that the security engineer has to anticipate every kind of private data they want to guard against.
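For comparison, the hand-coded approach the passage describes looks like this: a list of regular expressions, each anticipating one particular shape of secret. The patterns below are common illustrative examples, not Pivotal’s actual rules:

```python
import re

# Each rule anticipates one specific kind of secret the engineer thought of.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS-style access key id
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"""(?i)password\s*=\s*['"][^'"]{8,}['"]"""),  # hard-coded password
]

def find_secrets(source: str):
    return [m.group(0) for pattern in SECRET_PATTERNS for m in pattern.finditer(source)]

code = 'db_password = "hunter2hunter2"\napi_key = "AKIAABCDEFGHIJKLMNOP"\n'
print(find_secrets(code))
# Anything that doesn't match one of the anticipated shapes slips straight through.
```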
To automate the process, the Pivotal team first turned the text of passwords and code into matrices, or lists of numbers describing each string of characters. This is the same process used when AI interprets images—similar to how the images reflected into our eyes are turned into electrical signals for the brain, images and text need to be in a simpler form for computers to process.
When the team visualized the matrices, private data looked different from the standard code. Since passwords or keys are often randomized strings of numbers, letters, and symbols—called “high entropy”—they stand out against non-random strings of letters.
[Image: a visualization of the matrix for 100 characters of simulated secret information (Pivotal).]
[Image: a visualization of the matrix for 100 characters of normal, non-secret code (Pivotal).]
The two patterns are completely different, with patches of higher entropy appearing lighter in the first example of “secret” data.
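The image-style trick can be demonstrated in a few lines: map each character to a number, reshape the string into a matrix, and the higher entropy of randomized secrets stands out against ordinary code. This is only a conceptual illustration, not Pivotal’s model, and the sample strings are invented:

```python
import math
import numpy as np

def to_matrix(text: str, width: int = 10) -> np.ndarray:
    """Encode characters as numbers and pad/reshape into a 2D 'image'."""
    values = [ord(c) for c in text[: width * width]]
    values += [0] * (width * width - len(values))
    return np.array(values).reshape(width, width)

def shannon_entropy(text: str) -> float:
    probs = [text.count(c) / len(text) for c in set(text)]
    return -sum(p * math.log2(p) for p in probs)

secret = "aK9#rT2!pX7qLm4ZvB8sWd3NcF6hJ0yUeG5oIrA1tQ"  # randomized, key-like string
normal = "for item in items: result.append(item.name)"  # ordinary code

print("secret entropy:", round(shannon_entropy(secret), 2))
print("normal entropy:", round(shannon_entropy(normal), 2))
print(to_matrix(secret)[:3])  # rows of the matrix fed to the image model
```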
Pivotal then trained a deep learning algorithm typically used for images on the matrices, and, according to Pivotal chief security officer Justin Smith, the end result performed better than the regular expressions the firm typically uses.